
Vectordash Primer

A GPU accelerated cloud computing marketplace

Introducing Vectordash

Vectordash is a cloud GPU platform that lets anyone rent out computational power.

1. The Bitcoin Network

2. Introducing Vectordash

3. Instance Pricing

4. Artificial Intelligence on Vectordash

5. References

Miners across the world earn Bitcoin by solving Hashcash proof-of-work puzzles.

The world’s most powerful computing network

In less than a decade, Bitcoin miners have assembled the world’s most powerful computing network, with a hash rate exceeding 16 exahashes per second and an annual electricity consumption of roughly 32 TWh, about as much as Denmark. Yet all of that power goes into computation that is useful for nothing beyond the proof of work itself, wasting both electricity and computational capacity.

[Chart: Bitcoin Network Hash Rate]

What if all this computational power were spent on something useful instead?

The world’s most affordable cloud GPU provider.

Backed by a network of powerful Nvidia GPUs.

Get paid to provide GPU compute

Rent out your GPU on Vectordash and earn 1.5x more than you would mining the most profitable cryptocurrency. GPU owners become hosts simply by running the Vectordash desktop client: they select an availability end date denoting how long they plan on hosting, and their computer is then listed as available for anyone to rent. The desktop client also supports auto-switching, so while your machine is not being utilized on Vectordash, it can keep mining cryptocurrency, maximizing your GPU’s earnings.
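As a rough sketch of the auto-switching behavior described above (hypothetical only: the function names, polling interval, and decision rule below are assumptions, not the actual Vectordash desktop client), the host-side loop might look like this:

import time

POLL_INTERVAL_S = 60  # hypothetical: how often the client checks for rentals

def rental_active():
    """Placeholder: ask the marketplace whether this GPU is currently rented."""
    return False

def run_instance_slice():
    """Placeholder: dedicate the GPU to the renter's instance for a short interval."""
    pass

def run_miner_slice():
    """Placeholder: mine the most profitable cryptocurrency for a short interval."""
    pass

def auto_switch(availability_end):
    """Host until the chosen end date, mining whenever the GPU is idle."""
    while time.time() < availability_end:
        if rental_active():
            run_instance_slice()   # a renter has the GPU: serve their instance
        else:
            run_miner_slice()      # idle: keep the GPU earning by mining
        time.sleep(POLL_INTERVAL_S)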

Powerful GPU instances for 10x cheaper

Vectordash’s GPU instances cost 10x less than their counterparts from cloud providers such as Amazon Web Services, Google Cloud, and Microsoft Azure. Our distributed infrastructure allows unused compute anywhere to be utilized by anyone who requests it. Our aim is to become the world’s largest cloud provider without owning any hardware, simply by providing a thin layer of infrastructure over existing computers. This allows us to offer high performance instances that are an order of magnitude more affordable than current cloud providers.


Ubuntu 16.04 deep learning image

There’s no need to spend hours installing drivers and libraries. Instances come with an Ubuntu 16.04 image preloaded with CUDA, cuDNN, and popular machine learning libraries such as TensorFlow, PyTorch, Keras, and Caffe.
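For instance, a quick sanity check from a fresh instance might look like the following (a minimal sketch; it assumes the image's PyTorch and TensorFlow 1.x builds were compiled with CUDA support, and the exact preinstalled versions are not specified here):

# Confirm that the preinstalled frameworks can see the instance's GPU.
import torch
import tensorflow as tf

print(torch.cuda.is_available())        # True if PyTorch detects a CUDA device
print(torch.cuda.get_device_name(0))    # e.g. "GeForce GTX 1080 Ti"

print(tf.test.is_gpu_available())       # TensorFlow 1.x check for a usable GPU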

We make AI simple

Once you’ve selected an instance type, Vectordash takes care of the rest. We’ll give you an IP address and an SSH key so you can log in and begin development instantly.

Simple. Powerful. Affordable.

$ ssh ubuntu@<instance-ip> -i key
Welcome to Ubuntu 16.04.3
$ python train.py
Step 1 / 300,000, train_loss = 4.504
Step 2 / 300,000, train_loss = 4.503
Step 3 / 300,000, train_loss = 4.499

Vectordash is the simplest way to get started with machine learning, artificial intelligence, and data science.

The world’s most affordable cloud provider. By far.

Vectordash GPU instances are 8.4x cheaper than Google Cloud, 16.7x cheaper than Amazon Web Services, 16.7x cheaper than Microsoft Azure, and significantly faster than comparable instances from all three.

Consumer-grade GPUs are much more powerful than their datacenter counterparts. For instance, an Nvidia 1080 Ti can train a neural network 5.5x faster than an Nvidia Tesla K80, a popular datacenter GPU. The speed of each GPU is taken into account when calculating the normalized costs in the table below. If a neural network takes 30 days to train to completion on an AWS p2.xlarge instance, it will cost $648. On Vectordash, the exact same neural network can be trained to completion in just 5.5 days for $38.78.

Cloud Provider          Instance Type       TFLOPs   Daily price   Normalized daily price   Normalized 30-day price
Vectordash              Nvidia 1080 Ti      11.34    $7.11         $1.29                    $38.70
Amazon Web Services     p2.xlarge           2.9      $21.60        $21.60                   $648.00
Google Cloud            Nvidia Tesla K80    2.9      $10.80        $10.80                   $324.00
Microsoft Azure         NC6                 2.9      $21.60        $21.60                   $648.00

GPU Instance Pricing (as of 2/11/2018)
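To make the normalization explicit, here is a short worked check using only the figures above (the 5.5x speedup factor is the 1080 Ti vs. K80 ratio cited earlier):

speedup = 5.5                          # 1080 Ti trains ~5.5x faster than a Tesla K80

vectordash_daily = 7.11                # $/day for a Vectordash 1080 Ti instance
normalized_daily = vectordash_daily / speedup
print(round(normalized_daily, 2))      # 1.29, matching the table

days_needed = 30 / speedup             # a 30-day K80 job finishes in about 5.5 days
print(round(days_needed * vectordash_daily, 2))   # 38.78, the figure quoted in the text

(The table’s $38.70 differs only because it multiplies the already-rounded $1.29 daily price by 30.)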


A marketplace for compute

The price of instances is dynamically adjusted based on the supply and demand of compute. Instead of being a monopolistic cloud provider charging arbitrary prices, we let our users decide what compute should be worth.
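Purely as a hypothetical illustration of what demand-driven price adjustment can look like (the rule, thresholds, and numbers below are invented for this sketch and do not describe Vectordash’s actual pricing algorithm):

def adjust_price(current_price, gpus_rented, gpus_listed, step=0.05, floor=0.10):
    """Nudge an hourly price toward equilibrium based on marketplace utilization."""
    utilization = gpus_rented / max(gpus_listed, 1)
    if utilization > 0.8:             # most listed GPUs are rented: let the price drift up
        current_price *= 1 + step
    elif utilization < 0.3:           # most listed GPUs sit idle: let the price drift down
        current_price *= 1 - step
    return max(current_price, floor)  # never fall below a minimal listing price

# Example: 90 of 100 listed GPUs are rented, so the price rises by 5%.
print(adjust_price(0.50, 90, 100))    # 0.525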

Lack of compute is a limiting factor for AI development

Accelerate AI breakthroughs with Vectordash

“One very easy way of always getting our models to work better is to just scale the amount of compute. So right now, if we’re training on, say, a month of conversations on Reddit, we can, instead, train on entire years of conversations of people talking to each other on all of Reddit.”

— Andrej Karpathy, Director of AI at Tesla

“There’s somewhat of a linear connection between how much compute power one has, and how many experiments one can run. How many experiments one can run determines how much knowledge you acquire or discover.”

— Trevor Darrell, Co-Director of the Berkeley AI Research Lab (BAIR)

“The surface of AI problems we can solve is limited by the hardware we have available.”

— Greg Brockman, co-founder of OpenAI

AI has the potential to become the most impactful technology humanity will ever create, giving us the ability to solve problems once thought to be unsolvable. However, the lack of available GPU compute remains the limiting factor in developing AI systems. Larger neural networks often require multiple GPUs and can take weeks at a time to train. Novel ideas are often discarded simply because the GPU compute costs would be too high.

By enabling latent compute to be brought to market, we’ve been able to reduce GPU instance costs by a factor of 10x. Our goal is to provide everyone with high performance compute, not just top research labs and large organizations. More affordable compute directly translates to more groundbreaking ideas being developed, improvements upon existing ideas, and the true democratization of AI.

Vectordash reduces GPU compute costs by 10x

References

1. “Amazon EC2 Pricing – AWS.” Amazon Web Services. https://aws.amazon.com/ec2/pricing/
2. Dettmers, Tim. “Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning.” Tim Dettmers, 4 Sept. 2017. timdettmers.com/2017/04/09/which-gpu-for-deep-learning/
3. “Graphics Processing Unit (GPU) | Google Cloud Platform.” Google Cloud. https://cloud.google.com/gpu/
4. “Linux Virtual Machines Pricing.” Microsoft Azure. https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/
5. McHugh, Jim. “NVIDIA Brings DGX-1 AI Supercomputer in a Box to OpenAI.” The Official NVIDIA Blog, 14 Oct. 2016. blogs.nvidia.com/blog/2016/08/15/first-ai-supercomputer-openai-elon-musk-deep-learning/
6. Mooney, Chris, and Steven Mufson. “Why the bitcoin craze is using up so much energy.” The Washington Post, 19 Dec. 2017. https://www.washingtonpost.com/news/energy-environment/wp/2017/12/19/why-the-bitcoin-craze-is-using-up-so-much-energy/?utm_term=.0329a6c90dce
7. McHugh, Jim. “NVIDIA Delivers AI Supercomputer to Berkeley.” The Official NVIDIA Blog. https://blogs.nvidia.com/blog/2016/12/06/ai-supercomputer-berkeley/



www.vectordash.com
