Healthcare’s increased investment in artificial intelligence has turned the industry’s attention to finding ways to maximize the value of AI deployment. One important example is data processing. There’s valuable information to be gleaned from sensors and medical devices operating at the edge, but near-real-time analysis has proved difficult without sending data to the cloud and back.
That’s beginning to change. At its Tech World event at CES 2026, Lenovo announced three servers designed to support AI inferencing at the edge. The goal: Run large language models in environments where power consumption is at a premium and round trips to the data center increase latency and pose privacy risks.
“You’re able to gain insight where the data’s collected and then take action. That helps clinicians solve problems as quickly as possible and do the things that matter for their patients,” says Dr. Justin T. Collier, healthcare CTO for North America at Lenovo. Inference servers occupy less space and don’t require typical data center infrastructure — or the heating, cooling and cubic-footage concerns that come with it.
AI Inferencing at the Edge Provides Immediate, Localized Decision-Making
Lenovo defines edge AI infrastructure as the hardware, software and networking services that make AI processing at the edge of the network possible. Where traditional cloud AI works well for large-scale model training and data storage, “edge AI focuses on immediate inference and localized decision-making,” according to Lenovo’s website.
Lenovo notes that edge AI is well suited for analyzing protected health information, and it’s largely unaffected by connectivity disruptions that can render cloud-based clinical applications inaccessible. The company has emphasized form factor, Collier says, aiming to strike a balance among often competing needs such as power consumption, heat management, size and performance.
Collier — who has a background in physical rehabilitation for patients following serious injuries — notes several use cases for AI inferencing at the edge.
- Supply chain and warehouse management. When clinicians don’t have the supplies they need, whether it’s heated blankets or specialty therapies, the patient experience suffers. Real-time insight into supply levels can help health systems predict and mitigate potential shortages, minimize waste and optimize resource allocation through more efficient ordering.
- Physical security. Research has shown that healthcare workers are up to five times more likely to suffer workplace violence than workers in private industry, Collier notes. Inference at the edge, where video cameras have been deployed, allows for real-time analysis. “Milliseconds can matter in terms of getting an appropriate response,” he says.
- Primary and home-based care. There are many well-documented scenarios for analyzing trends in patient data such as weight, blood pressure and A1C levels. Collier also points to opportunities in senior care, such as the use of unobtrusive radar sensors that monitor a patient’s gait to detect potential issues and prevent falls — and the many complications that stem from them.
- Bedside data management. Patients in the emergency department or intensive care unit may have more than a dozen devices monitoring movement or vital signs. There are likely many insights to be gained from looking at this information, but time is of the essence. “You want to be able to bring all of those data feeds at the bedside together and analyze them in as close to real time as you can, without going all the way to the cloud and back,” Collier says.
Governance, Patient Input and Future Use Cases All Matter
As with any technology deployment, Collier says, organizations considering AI inferencing at the edge must look toward the future.
“It shouldn’t be a point solution. It should be a foundation for the next expansion of your technology implementation,” he says. “Think about what you can do today that will let you do more advanced things tomorrow.”
The first step to getting that right is having proper data and technology governance in place. Health systems should convene a broad group of stakeholders, Collier says, and involve them in the design and implementation process. Otherwise, organizations risk wasting time and money acquiring a solution that no one is interested in using.
Critically, the stakeholder group should include patients, Collier says. One reason is to offer the perspective of individuals who have gone through specific medical journeys and would be on the receiving end of AI-supported clinical decision-making.
The patient perspective also helps the organization frame the benefits of edge AI inferencing for care delivery in terms of the privacy and ethical concerns patients most commonly raise.
“If you’re prepared to say, ‘When we monitor you this way, we can detect the instant you show signs of sepsis or even predict that you may be at risk for sepsis,’ then it’s a different conversation than saying, ‘We’re going to monitor your data and store it forever,’” Collier says.
Along those lines, he adds, healthcare organizations will need to update patient consent forms to indicate that AI tools may be used to support patient care.