How to choose edge AI devices

Edge computing is one of the most talked-about technology trends at the moment. As the trend heats up, perhaps you think it's time to invest in smart edge technology and grow your IoT network. But before you purchase an emerging edge device, let's discuss exactly what edge computing is, what it does, and whether your application can benefit from edge technology. Edge computing can dramatically increase the flexibility, speed and intelligence of IoT networks, yet edge AI devices are not a panacea for every challenge in smart networking applications. After helping you determine whether edge technology is right for your application, this article explores the key features and considerations to keep in mind when purchasing an edge AI device.

What is edge computing?

Edge computing takes IoT to another level. At the edge, raw data can be transformed into value in real time. By redistributing data processing across the network, edge computing elevates the role of network nodes, endpoints and other smart devices and makes them easier to manage.

Edge computing is arguably the opposite of cloud computing. With cloud computing, a data center centrally processes the data flowing in from the distributed network and transmits the results of the computation back out to trigger operations or implement changes. However, transmitting large amounts of data over long distances carries costs in money and time, as well as power consumption.

This is where edge computing comes in: when power, bandwidth and network latency are critical, edge computing is the solution. Whereas with centralized cloud computing data may travel hundreds of kilometers before it can be processed, edge computing processes data at the edge of the same network where it was captured, created or stored. This means that the processing latency of edge computing is almost negligible, and the power and bandwidth requirements are often dramatically reduced.

One of the main drivers of today's edge computing is the semiconductor industry, as advances in semiconductors allow chips to increase processing power without significantly increasing power consumption. Processors located at the edge can do more processing of the acquired data without consuming more power. In this way, more data can stay at the edge without having to be transmitted to the core. As a result, edge computing not only reduces total system power consumption, but also shortens response times and better protects data privacy.

Technologies such as artificial intelligence (AI) and machine learning (ML) also benefit from edge computing: they too need to reduce the cost of data acquisition while improving data privacy and security, which can be addressed through edge processing. Traditionally, technologies such as AI and machine learning have required massive amounts of resources to run, far beyond the magnitude typically available from endpoints or smart devices. Today, however, advances in hardware and software have the potential to embed these enabling technologies into smaller, more resource-constrained devices at the edge of the network.
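To show how little is needed to run inference on such a device, here is a minimal sketch, not taken from the article, assuming a TensorFlow Lite model file named model.tflite and the tflite_runtime package are already installed on the edge device:

```python
# Minimal on-device inference sketch; the model file name "model.tflite"
# and the presence of tflite_runtime on the device are assumptions.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input of the right shape; in practice this would be
# sensor or camera data captured locally at the edge.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print("Local inference output shape:", result.shape)
```

Because the model runs locally, only the inference result, not the raw data, ever needs to leave the device.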

Evaluating edge AI

Careful evaluation is necessary before selecting a platform that can perform edge processing and run AI algorithms or machine learning inference engines. Simple sensors and actuators, even those destined for the IoT, can be implemented with small integrated devices. Doing more processing at the edge requires a more powerful platform with a highly parallel architecture. This typically means using a graphics processing unit (GPU), but a platform that is too powerful also puts a burden on the limited resources at the edge of the network.

In addition, edge devices are fundamentally an interface to the real world, and therefore need to be compatible with some common interface technologies such as Ethernet, GPIO, CAN, serial and/or USB, and support peripherals such as cameras, keyboards and monitors.
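As an illustration of that real-world interfacing, here is a minimal sketch of driving a GPIO line from a Linux-based edge board; the gpiochip path and line number are placeholders, and the python-periphery package is assumed to be installed:

```python
# Toggle a GPIO line on a Linux edge board; /dev/gpiochip0 and line 17
# are illustrative placeholders that depend on the actual hardware.
import time
from periphery import GPIO

led = GPIO("/dev/gpiochip0", 17, "out")  # open line 17 as an output
try:
    for _ in range(5):
        led.write(True)   # drive the line high
        time.sleep(0.5)
        led.write(False)  # drive the line low
        time.sleep(0.5)
finally:
    led.close()
```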

In contrast to data centers, where environmental factors are controlled, the edge environment can be very different: edge devices may be exposed to extremes of temperature, humidity, vibration, and even high altitude. These factors will influence equipment selection and how it is packaged or installed.

Another important aspect to consider is regulatory requirements. Any device that uses radio frequency (RF) for communication will be subject to regulations and may require approval before use. Some platforms can be used “out of the box,” but others may require more effort. Once a platform is deployed, hardware upgrades are unlikely, so it is prudent to specify processing power, memory and storage with headroom for future performance improvements.

Software upgrades are a different matter. Unlike hardware, software can be updated even while the device is deployed in the field. Today, this over-the-air (OTA) update approach is very common, and most edge devices are likely to support OTA updates in the future.
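For illustration only, a very simplified OTA update check might look like the sketch below; the endpoint URL, version string and JSON fields are all hypothetical, and a real OTA framework would add signature verification and safe rollback:

```python
# Illustrative OTA update check; the URL, version and manifest fields
# are placeholders, not a real update service.
import json
import urllib.request

UPDATE_URL = "https://updates.example.com/edge-device/latest.json"  # placeholder
CURRENT_VERSION = "1.2.0"                                           # placeholder

with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
    manifest = json.load(resp)

if manifest.get("version") != CURRENT_VERSION:
    print("New firmware available:", manifest.get("version"))
    # A real deployment would verify a signature, write the image to a
    # spare partition, and fall back automatically if the new image fails.
else:
    print("Device is up to date.")
```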

Choosing the right solution requires careful evaluation of all of the above points against the specific needs of the application. Does the device need to process video or audio data? Does it only need to monitor temperature, or other environmental indicators as well? Does it need to be on all the time, or will it be dormant for long periods and triggered by external events? Most of these requirements apply to any technology deployed at the edge, but as customer expectations for processing capability and output increase, the list of requirements needs to expand as well.

Benefits of edge computing

Technically, AI and machine learning can now be applied to edge devices and smart nodes, which presents significant opportunities. This means that processing engines are not only closer to the data source, but can do more with the data they collect.

Edge computing offers quite a few benefits. First, it lets organizations use their data more productively and efficiently. Second, edge computing can simplify network architectures because less data needs to be moved. Third, it makes proximity to the data center less important. This last point may seem inconsequential if the data center is located in the center of a city, close to where the task is performed, but if the edge of the network is in a remote location such as a farm or a water treatment plant, edge computing makes a big difference.

Data moves fast across the Internet. Most people may be surprised to learn that their search results may have circled the globe twice before showing up on the screen, because the total time taken is a fraction of a second, barely the snap of a finger. But for the sensors, actuators and other smart devices that make up connected, intelligent and often autonomous systems, every second feels like an hour.

This round-trip latency is an issue that manufacturers and developers of real-time systems need to take seriously. The time it takes for data to travel to and from the data center is not irrelevant and certainly not instantaneous, and reducing latency is a key goal of edge computing. Edge computing can integrate with faster networks such as 5G, but it's important to note that as more and more devices come online, faster networks alone will not solve the cumulative latency problem.

It is predicted that by 2030 there could be as many as 50 billion connected devices online. If every device requires a broadband connection to the data center, the network will always be clogged. If every device operation needs to wait for data to arrive from the previous stage before it can proceed, the total latency soon becomes significant. Edge computing is therefore the only practical way to alleviate network congestion.
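To make the cumulative-latency point concrete, here is a back-of-the-envelope sketch with purely illustrative numbers; the round-trip time, stage count and local processing time are assumptions, not figures from this article:

```python
# Back-of-the-envelope latency budget with made-up numbers:
# if each processing stage waits on a cloud round trip, delays add up.
round_trip_ms = 80   # assumed cloud round-trip time per stage
edge_ms = 5          # assumed local processing time per stage
stages = 10          # assumed number of dependent stages

print("Cloud pipeline latency:", round_trip_ms * stages, "ms")  # 800 ms
print("Edge pipeline latency: ", edge_ms * stages, "ms")        # 50 ms
```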

However, while many applications can benefit from edge computing, how much they benefit still depends heavily on the application itself. The four laws of edge computing below will help engineering teams determine whether edge computing is appropriate for a specific application.

The Four Laws of Edge Computing

It goes without saying that the first law is the law of physics. The advantage of RF energy is that it travels at the speed of light, just like photons in a fiber optic network. The disadvantage is that it cannot travel any faster. So if the round trip is still too long even at the speed of light, edge computing may be the better choice.

Ping testing provides a simple way to measure the time it takes for packets to travel between two network endpoints. Online games are often hosted on multiple servers, and gamers need to ping the servers until they find the server with the least latency for the fastest data transfer. This shows that even a tenth of a second is critical for time-sensitive data.
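As a rough stand-in for a ping test, the sketch below times a TCP handshake to a couple of hosts; the host names are placeholders, and a real ping uses ICMP rather than TCP, but the idea of comparing round-trip times is the same:

```python
# Time a TCP handshake to approximate network round-trip latency.
# The host names and port are placeholders for real servers.
import socket
import time

def tcp_round_trip_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; only the elapsed time matters here
    return (time.perf_counter() - start) * 1000.0

for host in ("example.com", "example.org"):  # candidate servers to compare
    print(host, round(tcp_round_trip_ms(host), 1), "ms")
```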

Network latency does not depend only on the transmission medium. There are encoders and decoders at both ends of the link, and the physical layer has to convert electrical signals into whatever form of energy the medium uses and back again. This process takes time even when the processor is running at GHz speeds, and the larger the amount of data being moved, the longer it takes.

The second law is the law of economics. This law is more flexible, but as the demand for processing and storage resources soars, it becomes less and less predictable. Margins are already slim, and if the cost of processing data in the cloud suddenly rises, it could turn a profit into a loss.

The cost of cloud services includes buying or renting servers, racks or blades. The cost may depend on the number of CPU cores, the amount of RAM or permanent storage required, and the level of service. Services that guarantee uptime cost more than services without guarantees. Basic network bandwidth may seem essentially free, but if the bandwidth must meet a certain standard at all times, you will need to pay for that service level, and this needs to be factored into the cost evaluation.

That said, the cost of edge data processing does not fluctuate significantly. Once the initial cost of the equipment is paid, the additional cost of processing any amount of data at the edge is virtually nil.
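The following sketch illustrates that comparison with entirely made-up numbers; the data volume, per-gigabyte rate, service life and hardware price are assumptions chosen only to show the shape of the calculation:

```python
# Rough cloud-versus-edge cost comparison with illustrative numbers only.
gb_per_month = 500          # assumed data volume sent to the cloud
cloud_cost_per_gb = 0.09    # assumed transfer/processing cost in USD
months = 36                 # assumed service life
edge_hardware_cost = 400.0  # assumed one-time edge device cost

cloud_total = gb_per_month * cloud_cost_per_gb * months
print(f"Cloud processing over {months} months: ${cloud_total:,.2f}")
print(f"Edge device (one-time cost):           ${edge_hardware_cost:,.2f}")
```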

Data has value because of the information it carries. This relates to the third law, the law of the land. Anyone capturing information may need to comply with the data privacy laws of the region where the data is captured. This means that even if you rightfully own the device and the data it captures, you may not be allowed to transfer that data across geographic borders.

Relevant regulations include the EU Data Protection Directive, the General Data Protection Regulation (GDPR), and the APEC Privacy Framework. Canada's Personal Information Protection and Electronic Documents Act is aligned with EU data protection law, and the U.S. Safe Harbor arrangement (since superseded) was intended to provide similar compliance.

However, edge processing can solve this problem: when data is processed at the edge, it does not need to leave the device. Data privacy on portable consumer devices is becoming increasingly important. Facial recognition on cell phones uses local AI to process camera images, so the data never leaves the device. Similarly, closed-circuit television (CCTV) and other security surveillance systems use cameras to monitor public spaces, and the images often need to be transmitted and processed through cloud-based data servers, which raises data privacy issues. With edge computing, the images can be processed directly at the camera, faster and more securely, potentially eliminating or simplifying the data privacy measures required.
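As a simple illustration of keeping images on the device, the sketch below detects faces in a single camera frame locally and reports only a count; it assumes OpenCV (opencv-python) is installed and a camera is available at index 0, neither of which is stated in the article:

```python
# Process a camera frame locally so raw images never leave the device;
# only derived metadata (a face count) would be reported upstream.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
camera = cv2.VideoCapture(0)  # camera index 0 is an assumption

ok, frame = camera.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Faces detected in this frame: {len(faces)}")  # metadata only

camera.release()
```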

Finally, let's consider Murphy's Law, which states that if something can go wrong, it eventually will. Even the most carefully designed systems can fail. Transferring data over the network, storing it in the cloud, and processing it in the data center introduces many possible points of failure, and edge processing avoids the failures that can occur along that lengthy path.

Asking the right questions about edge computing

Even if your application can benefit from edge processing, there are still questions to consider. Here are some of the most relevant ones.

1. What processor architecture is your application running on? Porting software to a different instruction set can be costly and cause delays, so an upgrade does not necessarily mean moving to another architecture.

2. What is the operating environment of the device? Is it extremely hot, extremely cold, or both? The Mars missions are a good example of “edge processing” in a highly variable operating environment!

3. Does your hardware need to comply with regulations or be certified? The answer is almost certainly yes, so choosing a pre-certified platform can save time and cost.

4. How much power does the device need? Supplying power is expensive in terms of both unit cost and installation, so it's important to know exactly how much is “enough.”

5. Are edge devices constrained by form factor? This matters more in edge processing than in many other deployments, so it should be considered early in the design cycle.

6. How long is the service life? Will the equipment be used for industrial applications that may need to run for years, or will the lifecycle be measured in months?

7. What are the system performance requirements in terms of processing power, for example frames per second (see the benchmarking sketch after this list)? What are the memory requirements? What language does the application use?

8. Are there cost considerations? This is a tricky question, because the answer is always yes, but knowing the cost constraints will help you make your choice.
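To answer the processing-power question in practice, a quick throughput benchmark on the candidate platform can help; the sketch below is generic, and process_frame is a hypothetical placeholder for whatever workload (decoding, inference, filtering) your application actually runs:

```python
# Estimate frames-per-second throughput of a processing step on a
# candidate platform; process_frame is a placeholder workload.
import time

def process_frame(frame: bytes) -> int:
    # Placeholder: a real benchmark would call the actual pipeline here.
    return sum(frame) % 256

frames = [bytes(640 * 480) for _ in range(100)]  # dummy VGA-sized frames

start = time.perf_counter()
for frame in frames:
    process_frame(frame)
elapsed = time.perf_counter() - start

print(f"Approximate throughput: {len(frames) / elapsed:.1f} frames per second")
```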

Conclusion

Edge processing is embodied in the IoT, but it goes further than that. It is driven by expectations that go beyond simply connecting devices. At a basic level, devices may need to be low-power and low-cost, but they are now also expected to deliver a higher level of intelligent operation without compromising on power or cost.

Choosing the right technology partner makes it easier to select the right platform. ADLINK has a broad portfolio of edge processing solutions and works with many companies that offer complementary technologies. Welcome to the edge computing development ecosystem; we can help you choose the right edge computing platform for your AI applications.
