sasha (HF Staff) committed
Commit b1137c3 · 1 Parent(s): dbac45c

adding methodology

Files changed (1): app.py +11 -1
app.py CHANGED
@@ -80,7 +80,17 @@ with gr.Blocks() as demo:
         "## Explore the data from 'When we pay for cloud compute, what are we really paying for?'"
     )
     with gr.Accordion("Methodology", open=False):
-        gr.Markdown("TODO")
+        gr.Markdown(
+            """
+            To conduct our analysis, we gathered data from 5 major cloud compute providers – Microsoft Azure, Amazon Web Services, Google Cloud Platform,
+            Scaleway Cloud, and OVH Cloud – about the price and nature of their AI-specific compute offerings (i.e. all instances that include GPUs).
+            For each instance, we looked at its characteristics: the type and number of GPUs and CPUs it contains, as well as its memory and storage capacity.
+            For each CPU and GPU model, we looked up its **TDP (Thermal Design Power)** – its power consumption under the maximum theoretical load –
+            which is an indicator of the operating expenses required to power it. For GPUs specifically, we also looked at the
+            **Manufacturer's Suggested Retail Price (MSRP)**, i.e. how much that particular GPU model cost at the time of its launch, as an indicator
+            of the capital expenditure required for the compute provider to buy the GPUs to begin with.
+            """
+        )
     with gr.Row():
         gr.Markdown("## Energy Data")
     with gr.Row():
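
As a reference for the methodology text added above, here is a minimal sketch of how per-instance TDP and MSRP figures could be turned into the operating-expense and capital-expenditure indicators it describes. The field names, lookup tables, and numbers are hypothetical placeholders, not values from app.py or the underlying dataset.

```python
# Hypothetical sketch only: illustrates the opex/capex indicators described above.
# TDP and MSRP values here are placeholders, not figures from the dataset.

TDP_WATTS = {"A100": 400, "V100": 300, "EPYC 7R32": 280}  # assumed per-unit TDP
MSRP_USD = {"A100": 10000, "V100": 8000}                   # assumed launch prices

def instance_indicators(instance: dict) -> dict:
    """Rough proxies: max power draw (opex) and GPU purchase cost (capex)."""
    gpu_power = TDP_WATTS.get(instance["gpu_model"], 0) * instance["gpu_count"]
    cpu_power = TDP_WATTS.get(instance["cpu_model"], 0) * instance["cpu_count"]
    gpu_capex = MSRP_USD.get(instance["gpu_model"], 0) * instance["gpu_count"]
    return {
        "max_power_watts": gpu_power + cpu_power,  # TDP under maximum theoretical load
        "gpu_msrp_usd": gpu_capex,                 # what the GPUs cost at launch
    }

# Example: a hypothetical 8x A100 instance with two EPYC CPUs.
example = {"gpu_model": "A100", "gpu_count": 8, "cpu_model": "EPYC 7R32", "cpu_count": 2}
print(instance_indicators(example))  # {'max_power_watts': 3760, 'gpu_msrp_usd': 80000}
```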