Graphcore and Cirrascale jointly release Graphcloud

Bristol, UK, 26 January 2021 – Graphcore today announced that it has taken another step forward in helping clients accelerate innovation and leverage artificial intelligence at scale.

Graphcore and Cirrascale Cloud Services (Cirrascale) have joined forces to bring a new dimension to artificial intelligence in the cloud: Graphcloud, the first publicly available scale-out cluster service built on second-generation IPU-PODs (MK2 IPU-PODs), gives customers an easy way to add compute capacity on demand without owning and operating their own data centers.

Graphcore recognizes that artificial intelligence presents a huge market opportunity, but also a unique set of computational challenges: model sizes are growing rapidly, and standards of accuracy keep rising. Customers who want to take advantage of the latest innovations need a tightly integrated hardware and software system built specifically for AI.

Graphcloud is a secure and reliable IPU-POD family of cloud services that enables customers to access the power of Graphcore IPUs as they scale from experiments, proof-of-concept and pilot projects to larger production systems.

At launch, Graphcloud offers two products; larger scale-out systems will follow in the coming months:

IPU-POD16: Provides 4 PetaFLOPS of AI computing (4 IPU-M2000s, i.e. 16 Colossus MK2 GC200 IPUs)

IPU-POD64: Provides 16 PetaFLOPS of AI computing (16 IPU-M2000s, i.e. 64 Colossus MK2 GC200 IPUs)

Compared to state-of-the-art GPU systems, IPU-POD systems reduce both overall training cost and time to solution.

Graphcloud instances come pre-installed with Poplar and system software. Sample code and application examples are available locally, including for the state-of-the-art models used in Graphcore benchmarks, such as BERT and EfficientNet. Users also have access to comprehensive documentation to help them get up and running quickly with multiple frameworks, including PyTorch and TensorFlow.

UK-based Healx, whose AI-based drug discovery platform searches for new treatments for rare diseases, was one of the first Graphcore customers to use Graphcloud. The company won the “Best Use of AI in Health and Medicine” award at the 2019 AI Awards.

Dan O’Donovan, Head of Machine Learning Engineering at Healx, said: “We started using the IPU-POD16 on Graphcloud in late December 2020, migrating our existing MK1 IPU code to run on the MK2 system. The process went smoothly and delivered huge performance benefits. Having more memory for the model means we no longer need to shard the model and can focus on sharding the data. This makes the code simpler and model training more efficient.”

He also noted: “Throughout our partnership, Graphcore has always given us access to the latest hardware, SDKs and tools. In addition, we maintain an ongoing dialogue with Graphcore’s hardware and software specialist engineers through direct meetings and support channels.”

Regarding the launch of Graphcloud, Nigel Toon, co-founder and CEO of Graphcore, said: “Whether users are evaluating our hardware and Poplar software stack for the first time or expanding their AI compute resources, Graphcloud makes getting started with IPUs easier than ever. We are delighted to partner with Cirrascale to bring Graphcloud to the world. Together with key components such as our global partner program, inside sales and engineering support, Graphcloud rounds out Graphcore’s full range of products and services.”

“Cirrascale is very proud of its strategic partnership with Graphcore, which furthers the era of cloud-based machine learning solutions and paves the way for new, large-scale commercial deployments with Fortune 500 companies,” said PJ Go, CEO of Cirrascale.

Pricing and Specifications

Available system types:

IPU-POD16: 4 IPU-M2000 systems

IPU-POD64: 16 IPU-M2000 systems



Both systems utilize Graphcore’s unique IPU-Fabric™ interconnect architecture. IPU-Fabric™ is designed to eliminate communication bottlenecks and allow thousands of IPUs to operate on machine intelligence workloads as a single high-performance, ultra-fast cohesive unit.

Each IPU-POD64 instance is backed by four Dell R6525 host servers with dual-socket AMD EPYC2 CPUs, as used in the most powerful on-premise AI data center systems; each IPU-POD16 has one dedicated server of the same specification.

Secure local NVMe storage: 16TB for each IPU-POD64 and 4TB for each IPU-POD16.

Each IPU-POD64 offers 57.6GB of In-Processor Memory and 2,048GB of Streaming Memory (32 x 64GB DIMMs).

Each IPU-POD16 offers 14.4GB of In-Processor Memory and 512GB of Streaming Memory (8 x 64GB DIMMs).
