Advancement in FPGA technologies has led to a context where these devices are no longer used just in a “single System-on-Chip” scenario: they are now used as computational elements, maybe even in the form of a “Programmable System-on-Multiple-Chips”, within a complex heterogeneous computing system. As shown in the figure, we are now working with a board, a card, that can host more than one FPGA. The overall system, at least in this figure, is composed of one host... but this is just a simplification that can easily be removed, leading us to a scenario where, on one hand, we can have multiple boards/cards connected to one host or, on the other hand, we may have multiple hosts sharing access to the same card or set of cards. Moreover, this multiple-hosts/multiple-cards architecture can be seen as the base node for a distributed infrastructure, where multiple such nodes can be connected together to realise the final computing infrastructure. Different companies, for different reasons, have demonstrated interest in such Heterogeneous Complex Distributed Systems. Examples of this are:

- IBM, with the Power8 architecture or, generally speaking, with what we can consider the Power8 ecosystem, such as the Coherent Accelerator Processor Interface and the OpenPOWER Foundation;
- Microsoft, with Project Catapult, where FPGAs are being deployed in Microsoft’s datacenters, including those supporting the Azure cloud, to accelerate processing and networking speeds;
- Amazon, with the Elastic Compute Cloud F1 instances, making customisable FPGAs for hardware acceleration generally available to a wide audience of designers;
- Ryft, with the Ryft One project, a Big Data infrastructure obtained via a Xilinx FPGA-accelerated architecture.

Back to the IBM Power8 processor: this architecture incorporates facilities to integrate it more easily into custom designs.
It introduces the Coherent Accelerator Processor Interface (CAPI), which is layered on top of PCI Express 3.0 and has been designed to integrate easily with external coprocessors such as GPUs, ASICs and FPGAs. Related to the Power8 technology, the OpenPOWER Foundation was founded in 2013 as an open technical membership organization with the goal of enabling the server vendor ecosystem to build their own customized server, networking and storage hardware for future data centers and cloud computing. As can be read on the OpenPOWER Foundation website, member companies are enabled to customize POWER CPU processors and system platforms for optimization and innovation for their business needs. These innovations include custom systems for large or warehouse-scale data centers, workload acceleration through GPU, FPGA or advanced I/O, platform optimization for SW appliances, or advanced hardware technology exploitation. IBM is looking to offer the Power8 chip technology and other future iterations under the OpenPOWER initiative, but it is also making previous designs, like processor specifications, firmware and software, available for licensing. The Coherent Accelerator Processor Interface (CAPI) spawned an open technology group of its own in October 2016: the OpenCAPI Consortium, founded by several OpenPOWER members together with Dell EMC, AMD and Hewlett Packard Enterprise.

The idea behind Project Catapult was to use Altera, now Intel FPGA, devices to accelerate Bing and Azure. Instead of relying on potentially less efficient software as the middle man, FPGAs allowed Microsoft engineers to implement their algorithms directly onto the hardware they were using. By doing this with a programmable device, instead of an ASIC, they also gained the advantages of device reconfiguration: Field Programmable Gate Arrays are well suited to adapt to unexpected runtime modifications.
They can be configured at a moment’s notice to adapt to new advances in artificial intelligence or to respond to another type of unexpected need in a datacenter. As in the case of the OpenPOWER Foundation, we can find something “similar” related to Project Catapult: the Project Catapult Academic Program. This program allows researchers worldwide to investigate new ways of using interconnected FPGAs as computational accelerators — a unique opportunity to access custom data center systems for high-demand research. The Project Catapult Academic Program is an initiative created by the collaboration between the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, and Microsoft.

Another example is the Amazon Elastic Compute Cloud F1: a compute instance with FPGAs that can be programmed by Amazon customers to create custom hardware accelerations for their applications. Once the FPGA design is ready — and this can be done by using an FPGA Developer 64-bit Amazon Machine Image — the user has to register the design as an Amazon FPGA Image (AFI) and deploy it to the F1 instance. So far, Amazon F1 instances are available in two different instance sizes that include up to eight Xilinx UltraScale Plus FPGAs per instance, each with 64 GB of local DDR4 protected memory and a dedicated PCI-e x16 connection to the instance. One really interesting aspect of using the F1 instances and the Amazon ecosystem is that, once the Amazon FPGA Image has been developed, it can be offered on the AWS Marketplace for other customers to purchase.

Previously, I also mentioned Ryft and the Ryft One project. It is interesting to see how Amazon has impacted the research done by others, and this is exactly the case of Ryft. After the launch of the Amazon F1 instances, Ryft started to work on, and is now offering, Ryft Cloud, an accelerator for data analytics and machine learning that extends the Elastic Stack.
As can be read on the Amazon Web Services website, Ryft Cloud sources data from Amazon Kinesis, Amazon Simple Storage Service, Amazon Elastic Block Store, and local instance storage, and uses massive bitwise parallelism to drive performance. Moreover, it also supports high-level JDBC, ODBC, and REST interfaces along with low-level C, C++, Java, and Python APIs.
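To give a concrete feel for what “bitwise parallelism” means in search workloads of this kind, here is a minimal sketch of the classic Shift-And bit-parallel substring search, where one machine word tracks many partial matches at once; FPGAs push the same idea to much wider custom datapaths. This is an illustrative example only, not Ryft’s actual implementation, and all names in it are hypothetical.

```python
def shift_and_search(text: str, pattern: str) -> list:
    """Return the start index of every occurrence of pattern in text,
    using one bit of the state word per pattern position (Shift-And)."""
    m = len(pattern)
    if m == 0 or m > 63:
        raise ValueError("pattern length must be 1..63 for this sketch")
    # One bitmask per character: bit i is set if pattern[i] == c.
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    state = 0            # bit i set => pattern[0..i] matches ending at the current char
    goal = 1 << (m - 1)  # the match is complete when the top bit is set
    hits = []
    for j, c in enumerate(text):
        # One shift, one OR and one AND advance *all* partial matches at once.
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & goal:
            hits.append(j - m + 1)
    return hits

print(shift_and_search("abracadabra", "abra"))  # [0, 7]
```

Each text character costs a constant number of word-wide bit operations regardless of how many partial matches are in flight, which is exactly the property a wide FPGA datapath exploits at scale.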