Edge computing is a hot topic and carries with it some confusion, particularly around storage. Handling data properly at the edge can ensure a scalable, cost-effective and secure infrastructure - but failing to set up the right architecture can lead to data loss, security vulnerabilities and sky-high costs related to the bandwidth needed to transfer data repeatedly to and from the public cloud. Bandwidth is a key consideration from an architecture perspective, and the reason why is clear: it is roughly 4X more expensive per GB than storage ($0.09 vs. $0.023 on AWS, for example).

While there are multiple models, ultimately we think they can be simplified down to just two: edge storage and edge cache. This post looks at both models and articulates the storage attributes that matter to deliver against them.

The edge storage model is employed when the goal is to do your processing and analytics at the edge, filtering out the noise and retaining/sending up just the insights and the data associated with those insights. In this model, the application, compute and storage exist at the edge and are designed to store and process data in situ. The goal is not to store PBs of data at the edge - rather, this model envisions 100s of GBs up to a few PBs or so. Visually, it looks like this:

At the most remote edge you have the data-producing devices, coupled with storage and compute/analytics. The compute/analytics can range from something like a Splunk DSP to a deep neural network model, but the key point here is that there is ETL, processing and insight generation at the remote edge. These instances are containerized and managed with Kubernetes as data pipelines. Kubernetes effectively imposes the requirement that the storage be disaggregated and object-based.

To complete the architecture, one would add a load balancer, another layer of Kubernetes, and then have an origin object storage server and the application layer (training the models, doing large-scale analytics, etc.) in a more centralized location. This model is employed by restaurants like Chick-fil-A. It is used for facial recognition systems. It is the default design for manufacturing use cases as well as 5G use cases. In each case, there is enough storage and computation onsite, or in an economically proximate location, to learn from the data.

Let's use an example of a car producing data from sensors - an area where MinIO has considerable deployment expertise. The purpose of collecting data is to build and train machine learning models. Cars don't have the compute resources internally to do the training, which is the most GPU-intensive part. In this case, the data is sent to an edge data center to build and train the machine learning models. Once the model is trained, it can be sent back to the car and used to make decisions and draw conclusions from new data coming in from sensors. It makes sense to distribute the training and processing geographically, to be as close to the devices as possible.

Eventually, that data will end up in the cloud (public or private). It will accumulate quickly and, in the case of autonomous vehicles, will amount to multiple PBs in no time. As a result, you will need the same storage on each end - at the edge and in the cloud. Object storage is the storage of choice in the cloud; as a result, object storage is the storage of choice for the edge.

A quick note on the private cloud: some conflate the edge with the private cloud, but this is a mistake. We get into the attributes the storage at the edge needs to have below - but it is important to note that legacy SAN/NAS systems are inflexible and often incompatible with these use cases, since data processing applications are adopting the S3 API natively.
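The core of the edge storage model - process data in situ and ship only the insights upstream - can be sketched in a few lines. This is a minimal illustration under stated assumptions, not MinIO's implementation: the `extract_insights` helper, the sensor field names and the 80.0 threshold are all hypothetical.

```python
import json
from statistics import mean

def extract_insights(readings, threshold=80.0):
    """Reduce raw sensor readings to a compact summary plus anomalies.

    Only this summary is shipped to the origin; the raw noise stays
    (and ages out) at the edge, keeping egress bandwidth costs low.
    """
    anomalies = [r for r in readings if r["value"] > threshold]
    return {
        "count": len(readings),
        "mean": round(mean(r["value"] for r in readings), 2),
        "anomalies": anomalies,
    }

# Raw data remains in the edge object store; only the small summary
# object would be PUT to the centralized origin bucket.
raw = [{"sensor": "temp", "value": v} for v in (71.2, 69.8, 95.5, 70.1)]
insight = extract_insights(raw)
payload = json.dumps(insight).encode()
```

In practice the `payload` would be written to the origin object store over the same S3 API used at the edge, which is exactly why having identical storage on both ends matters.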
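The bandwidth-vs-storage economics driving both models can be checked with simple arithmetic. A sketch using the per-GB prices quoted above (AWS S3 Standard storage at $0.023/GB-month and internet egress at $0.09/GB; current pricing may differ) - the data volumes and transfer counts are hypothetical:

```python
# List prices referenced in the post (assumptions; check current AWS pricing):
STORAGE_PER_GB_MONTH = 0.023   # S3 Standard storage, per GB-month
EGRESS_PER_GB = 0.09           # data transfer out to the internet, per GB

def monthly_cost(gb, round_trips_per_month):
    """Cost of keeping `gb` in the cloud and pulling it back out repeatedly."""
    return gb * STORAGE_PER_GB_MONTH + gb * round_trips_per_month * EGRESS_PER_GB

ratio = EGRESS_PER_GB / STORAGE_PER_GB_MONTH  # ~3.9: the "4X" in the post
naive = monthly_cost(10_000, 4)       # ship the full 10 TB back 4x a month
edge_first = monthly_cost(100, 4)     # ship only ~1% of it: the insights
```

The gap between `naive` and `edge_first` is dominated entirely by egress, which is the cost the edge storage model is designed to avoid.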