The Generative AI opportunity for IoT (Internet of Things) Part Two

Generative AI for IoT risks

The use of generative AI for IoT (Internet of Things) is poised to revolutionize business operations, automation, and decision-making. By combining structured IoT data with unstructured information, generative AI brings a variety of new capabilities and intelligence to the processing and analysis of operational data. This convergence is driving a fundamental shift in how businesses extract value from IoT ecosystems, transforming IoT into a more intelligent, context-aware system.

This second part of the two-part article discusses the risks that generative AI for IoT brings. Part One discussed the convergence of generative AI with IoT and highlighted some of the opportunities it creates. This blog is part of a continuing series of articles aimed at providing senior leaders and managers with a practical working knowledge of artificial intelligence.

Risks and Considerations for Generative AI in IoT

While generative AI for IoT presents transformative opportunities, it is still an evolving technology. Businesses using it face four broad categories of challenges and risks. Adopters must understand these risks and put safeguards in place to mitigate them and protect their businesses from unintended consequences.

Figure: Generative AI risks for IoT

Security and Privacy Risks

This category of risk concerns threats that compromise the integrity, confidentiality, and availability of data and systems. AI-driven IoT devices continuously collect, process, and transmit vast amounts of sensitive data, making them prime targets for cyberattacks, adversarial manipulation, and unauthorized access. Generative AI amplifies these risks by potentially generating misleading outputs, exposing sensitive information, or being exploited through adversarial attacks. Privacy concerns arise when AI models inadvertently memorize or infer personally identifiable information (PII) from IoT-generated data, leading to regulatory and compliance violations. To mitigate these risks, businesses must adopt secure AI architectures, encrypted data pipelines, federated learning, access controls, and adversarial robustness testing to ensure both security and privacy in AI-powered IoT ecosystems.
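As a minimal sketch of one of these safeguards, the code below encrypts IoT telemetry before it leaves the device, using Python’s cryptography package. The device ID, payload fields, and in-code key handling are illustrative assumptions, not a production design; in practice the key would come from a secrets manager.

```python
# Minimal sketch: encrypting IoT telemetry before transmission.
# Requires the `cryptography` package (pip install cryptography).
# Device ID and payload fields are hypothetical examples.
import json
from cryptography.fernet import Fernet

# Illustration only: in production the key comes from a secrets
# manager or hardware security module, never from the code itself.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = {"device_id": "pump-07", "vibration_mm_s": 4.2, "temp_c": 61.5}
token = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# Only services holding the key can recover the plaintext.
restored = json.loads(cipher.decrypt(token))
assert restored == reading
```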

Data privacy and unauthorized use of proprietary data

IoT data may be sensitive, and generative AI models trained on proprietary datasets risk exposing confidential business and personal information. For example, a smart agriculture solution collects data from its customers to refine and optimize its AI model. Some of this data may reflect the results of a grower’s proprietary expertise and knowledge. When the solution’s generative AI system offers other users recommendations based on learnings from that data, the grower’s proprietary knowledge is effectively shared without authorization.

In other cases, IoT data may contain users’ private information, including personally identifiable information (PII). Healthcare IoT devices, for example, collect a variety of patient information that informs treatment plans. These treatment plans may be based on the unauthorized use of IoT data collected from other patients. Such use of personal and private data violates data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA), the California Consumer Privacy Act (CCPA), and the European Union General Data Protection Regulation (GDPR). Businesses found in violation of these regulations are subject to fines and lawsuits.

Private and proprietary data may get into training datasets in a number of ways. The data may be stolen through cybersecurity breaches of the devices and their data storage locations. In other cases, proprietary data may be unintentionally exposed. For example, Samsung employees unintentionally released confidential information to ChatGPT on three separate occasions; ChatGPT uses data collected from prompts to train and refine its AI models. [7]

Businesses need to consider ways of protecting the proprietary data collected from IoT devices, as well as mitigating the impact of using unauthorized data in their generative AI systems. These measures may include, but are not limited to, collecting only the minimum data needed from IoT systems, implementing data governance and security practices, managing third-party vendors and data suppliers, and educating employees and suppliers.
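Of these measures, data minimization translates most directly into code: forward only the fields an AI service actually needs and drop everything else. A minimal sketch, with hypothetical field names and a single filtering point:

```python
# Minimal sketch of data minimization: only whitelisted fields are
# forwarded to a generative AI service. Field names are hypothetical.
ALLOWED_FIELDS = {"device_id", "timestamp", "soil_moisture", "temp_c"}

def minimize(reading: dict) -> dict:
    """Return a copy of the reading containing only whitelisted fields."""
    return {k: v for k, v in reading.items() if k in ALLOWED_FIELDS}

raw = {
    "device_id": "sensor-12",
    "timestamp": "2025-03-01T08:00:00Z",
    "soil_moisture": 0.31,
    "temp_c": 18.4,
    "grower_name": "Jane Doe",     # PII: should never leave the premises
    "field_gps": (41.88, -87.63),  # proprietary location data
}

assert "grower_name" not in minimize(raw)
```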

Reliability and Safety Risks

Reliability and safety risks are concerned with the potential for AI-powered IoT systems to produce incorrect, unpredictable, or delayed responses that compromise system dependability and physical safety. Unlike traditional AI, generative AI (GenAI) can create novel outputs that may be inaccurate, inconsistent, or unverifiable, leading to operational failures or hazardous situations. For example, an industrial IoT system relying on GenAI for predictive maintenance might generate misleading recommendations, causing premature shutdowns or equipment failures. In critical applications such as healthcare, autonomous vehicles, or smart grids, unreliable AI decisions can pose life-threatening risks. To mitigate these issues, businesses must implement robust validation mechanisms, real-time monitoring, human oversight, and hybrid AI approaches that combine deterministic models with GenAI to ensure accuracy and system resilience.
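The hybrid approach mentioned above can be as simple as auto-approving a GenAI recommendation only when a deterministic rule reaches the same conclusion. A minimal sketch, assuming a hypothetical vibration threshold and recommendation shape:

```python
# Minimal sketch of a hybrid check: a GenAI shutdown recommendation is
# only auto-approved when a deterministic rule agrees with it.
# The threshold and recommendation format are assumptions.
VIBRATION_LIMIT_MM_S = 7.0  # hypothetical shutdown threshold

def rule_says_shutdown(vibration_mm_s: float) -> bool:
    return vibration_mm_s >= VIBRATION_LIMIT_MM_S

def dispatch(genai_recommends_shutdown: bool, vibration_mm_s: float) -> str:
    rule = rule_says_shutdown(vibration_mm_s)
    if genai_recommends_shutdown == rule:
        return "auto-approve"       # both methods agree
    return "escalate-to-human"      # disagreement needs an engineer

print(dispatch(genai_recommends_shutdown=True, vibration_mm_s=3.1))
# -> escalate-to-human: the model wants a shutdown the rule cannot justify
```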

Hallucinations and Misinformation

One of generative AI’s capabilities is creating reports, strategies, and other content based in part on IoT data. However, generative AI can also produce factually incorrect or misleading content, posing risks in critical decision-making, customer communications, and automated workflows. For example, it may analyze anomalous machine condition data and make an incorrect recommendation; acting on that recommendation without confirming its accuracy can lead to adverse outcomes, including injury and harm. Hallucinations have a variety of possible causes, including training model limitations, the probabilistic nature of Large Language Models (LLMs), limited access to real-time information, gaps in context understanding, and variability in user input prompts. In addition to adopting approaches that minimize hallucinations, businesses will need to implement policies and processes to review and confirm generated information before using it.
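One such process is a review gate: generated content is held until a human confirms it, and automated systems refuse to act on unapproved items. A minimal sketch, with hypothetical class and field names:

```python
# Minimal sketch of a review-and-confirm policy: generated content sits
# in a queue and cannot be published or acted on until a human approves
# it. Class and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GeneratedItem:
    content: str
    approved: bool = False
    reviewer: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        self.approved = True
        self.reviewer = reviewer

def publish(item: GeneratedItem) -> None:
    if not item.approved:
        raise PermissionError("Generated content requires human review")
    print(f"Publishing (approved by {item.reviewer}): {item.content}")

report = GeneratedItem("Bearing wear detected on line 3; schedule service.")
report.approve("maintenance-lead")
publish(report)
```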

Operational and Business Risks

Operational and business risks are concerned with challenges that impact efficiency, scalability, cost-effectiveness, and long-term viability of AI-driven IoT implementations. Generative AI introduces high computational demands, increasing infrastructure costs and energy consumption, which may make widespread deployment unsustainable. Additionally, generative AI’s black-box nature can lead to decisions that are difficult to explain, reducing trust and making it harder for executives to assess risks and justify investments. Poorly implemented AI models can also create inefficiencies, such as false positives in anomaly detection or unnecessary interventions in automated processes, leading to disruptions and financial losses. To mitigate these risks, businesses must prioritize cost-effective AI architectures, explainable AI (XAI) models, strong governance frameworks, and strategic oversight to align AI investments with business objectives while ensuring reliability and return on investment.

Inability to explain generative AI outcomes

Generative AI models often act as ‘black boxes,’ meaning their decision-making processes and how they arrive at a response are not always understood. The inability to explain outcomes generated by AI systems analyzing IoT data presents a significant challenge for businesses, particularly in high-stakes environments like healthcare, manufacturing, and infrastructure management. For example, in industrial IoT applications, a generative AI model might flag an impending equipment failure based on sensor anomalies. However, without an interpretable rationale, maintenance teams may reject the recommendation, leading to either unnecessary downtime or overlooked critical issues. 

This non-explainability stems from several factors, including the complexity of neural network architectures, the non-linear relationships between IoT data inputs and AI-generated outputs, and the dynamic nature of IoT environments where real-time data fluctuations can influence AI-driven conclusions. Moreover, the lack of transparency of these models makes it difficult to detect errors or biases, increasing the risk of unintended consequences.

To mitigate these risks, businesses should implement measures such as explainable AI (XAI) techniques, which provide transparency into how models derive their outputs. Additionally, combining generative AI with rule-based or traditional statistical models can help validate AI-driven insights against known benchmarks. Regular model audits, human-in-the-loop validation, and robust documentation of AI decision processes are also critical for ensuring trust and accountability in AI-powered IoT applications.
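For the traditional models that often sit alongside generative AI in these pipelines, such as a classifier that flags equipment failures from sensor features, attribution libraries like SHAP can show which inputs drove a prediction. A minimal sketch on synthetic data; the feature meanings and model choice are illustrative:

```python
# Minimal sketch of explainability for an IoT failure classifier using
# SHAP. Data is synthetic and feature meanings are hypothetical; the
# generative model itself needs other techniques, but the flagging
# model can often be explained this way.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # vibration, temp, current
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)  # synthetic failure label

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])     # per-feature attributions

# Larger absolute values mean the feature pushed the prediction harder,
# giving maintenance teams a rationale they can inspect and challenge.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
print(np.round(sv, 3))
```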

Cost and ROI uncertainty

Cost and ROI uncertainty is a significant risk for businesses considering generative AI for IoT, as both the initial investment and long-term financial impact can be difficult to predict. One of the primary causes of this uncertainty is the high variability in AI deployment costs, which depend on factors such as the volume of IoT data being processed, the computational resources required, and ongoing model maintenance. Unlike traditional AI models with more predictable cost structures, generative AI often demands continuous retraining and fine-tuning to remain effective in dynamic IoT environments. Additionally, hidden costs—such as integrating AI with legacy IoT infrastructure, addressing data quality issues, or hiring specialized talent—can escalate expenses beyond initial projections. For example, a logistics company implementing generative AI for route optimization may find that cloud computing costs surge as the model scales to handle real-time traffic and weather data from thousands of IoT devices.

ROI uncertainty further complicates decision-making, as businesses may struggle to quantify the direct financial benefits of generative AI-powered IoT solutions. While some use cases, such as predictive maintenance, offer clear cost savings by reducing downtime, other applications—like AI-generated insights for supply chain optimization—may require longer time horizons to yield measurable returns. Additionally, if AI models produce unreliable or biased results, businesses risk making costly decisions based on flawed insights, potentially negating any anticipated efficiency gains. To mitigate these risks, businesses should start with targeted pilot programs that allow them to measure the impact of generative AI in a controlled setting before committing to full-scale implementation. They should also establish clear success metrics tied to business outcomes, such as operational efficiency improvements, cost reductions, or revenue growth. Finally, adopting a flexible cost structure—such as pay-as-you-go cloud AI services—can help manage financial risks while ensuring that investments align with actual value generation over time.
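Even a back-of-the-envelope cost model can expose scaling risk before a pilot begins. The sketch below is illustrative only; every price and volume in it is a hypothetical assumption to be replaced with actual vendor pricing:

```python
# Minimal sketch of a GenAI-for-IoT cost projection. Every number is a
# hypothetical assumption; substitute your vendor's actual pricing.
devices = 5_000                 # IoT endpoints feeding the model
events_per_device_per_day = 24  # readings summarized by the LLM
tokens_per_event = 600          # prompt + completion, assumed average
price_per_1k_tokens = 0.002     # assumed blended $/1k tokens

daily_tokens = devices * events_per_device_per_day * tokens_per_event
daily_cost = daily_tokens / 1_000 * price_per_1k_tokens
print(f"tokens/day: {daily_tokens:,}")
print(f"inference cost: ${daily_cost:,.2f}/day (${daily_cost * 365:,.0f}/yr)")
# Doubling device count or token length doubles the bill: cost scales
# linearly with volume, which is what makes ROI hard to pin down upfront.
```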

Technology maturity

Generative AI is still an emerging technology, and its immaturity presents several risks for businesses looking to integrate it into their operations. Many generative AI models are trained on general datasets that may not be fully optimized for the specific nuances of IoT data, leading to unpredictable or suboptimal outcomes. Additionally, the technology is rapidly evolving, meaning that models and best practices in use today may become outdated or require significant revisions in just a few months. This creates challenges for businesses investing in AI-powered IoT solutions, as they may face difficulties maintaining compatibility, ensuring long-term reliability, and avoiding vendor lock-in with proprietary or soon-to-be obsolete models. For example, a manufacturing company implementing generative AI for predictive maintenance might find that early versions of the model struggle with real-world sensor noise, leading to false positives or undetected failures.

To mitigate these risks, managers should take a phased approach to adoption—starting with pilot projects before committing to full-scale deployment. They should also prioritize flexibility by choosing AI solutions that allow for model retraining and updates as the technology matures. Engaging with AI vendors that have strong research and development roadmaps, investing in internal AI expertise, and establishing cross-functional AI governance teams can further help businesses navigate the uncertainties of generative AI’s evolution while maximizing its potential benefits.
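A pilot can also quantify the false-positive problem directly by scoring the model’s flags against what actually happened. A minimal sketch with made-up pilot labels:

```python
# Minimal sketch of pilot evaluation: compare the model's failure flags
# with observed outcomes during the pilot. Labels are made up.
from sklearn.metrics import precision_score, recall_score

actual_failures = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # ground truth
model_flags     = [0, 1, 1, 0, 0, 1, 0, 1, 0, 0]   # model predictions

# Precision: of the flags raised, how many were real failures?
# Recall: of the real failures, how many did the model catch?
print("precision:", precision_score(actual_failures, model_flags))
print("recall:   ", recall_score(actual_failures, model_flags))
# Low precision here would suggest sensor noise is driving false alarms.
```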

Ethical and Regulatory Risks

Ethical and compliance risks in the context of AI and IoT refer to the potential for AI-driven IoT systems to violate legal, regulatory, or ethical standards, leading to reputational damage, legal penalties, and loss of stakeholder trust. Generative AI (GenAI) can amplify biases, generate misleading or discriminatory outputs, and make decisions that lack transparency, raising concerns about fairness, accountability, and trustworthiness. Additionally, AI-powered IoT systems often process sensitive data, creating risks of privacy violations and non-compliance with regulations such as GDPR, HIPAA, and sector-specific mandates. Without proper oversight, AI models may also generate content that misrepresents facts or conflicts with corporate values. To mitigate these risks, businesses must implement AI governance frameworks, ethical AI principles, bias detection mechanisms, regulatory audits, and human-in-the-loop oversight to ensure compliance and responsible AI deployment.

Regulatory Compliance

Laws and regulations around AI, including generative AI, are still developing. Businesses risk non-compliance with future legal frameworks, as well as liability for harm caused by the use of these systems. Some future regulations may govern what data can be collected and how it is used; others may govern how, where, and for what purposes AI can be used.

For example, California’s AI Transparency Act requires a covered provider to identify content as AI-generated and to provide information regarding the “provenance of the content created by the generative AI system”. [1] Similar legislation is being considered in a number of other states. Proposed legislation in Washington (HB 1170), Florida (HB 369), Illinois (SB 1929), and New Mexico (HB 401) would require “generative AI providers to include watermarks in AI-generated outputs and provide free AI detection tools for users.” [2] Legislation concerning the development and use of large language models is also being considered. California (AB 222) would establish safeguards for the development of these models, while Illinois (HB 3506) would require developers to conduct risk assessments every 90 days. Rhode Island (H 5224) would impose strict liability for all “injuries to non-users that are factually and proximately caused by the covered model.” [3]
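In practical terms, these disclosure obligations amount to attaching machine-readable provenance to generated output. A minimal sketch of the idea; the field names are illustrative and do not follow any statute’s required schema:

```python
# Minimal sketch of AI-content disclosure: wrap generated output with
# provenance metadata. Field names are illustrative only and do not
# implement any specific statute's required schema.
import json
from datetime import datetime, timezone

def with_provenance(generated_text: str, model_name: str) -> str:
    return json.dumps({
        "content": generated_text,
        "ai_generated": True,  # explicit disclosure flag
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(with_provenance("Irrigation summary for field 7 ...", "example-llm-v1"))
```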

Different geographic regions, such as the European Union, have their own regulations and frameworks concerning the use of AI. For example, the European Union’s (EU) AI Act has specific provisions for generative AI systems, including safeguards against the generation of content that violates EU law, documentation of the use of copyrighted training data, and compliance with transparency obligations. [4]

Furthermore, the EU regulations require that generative AI comply with obligations imposed on foundation models (large language models). These additional obligations include: [5]

  • Mitigating risks to health, safety, fundamental rights, environment, democracy and the rule of law
  • Using unbiased datasets
  • Ensuring performance, predictability, interpretability, corrigibility, safety and cybersecurity over its lifecycle
  • Establishing a quality management system to ensure and document compliance with the EU Act

Regulated industries, such as healthcare, insurance, and energy, face additional restrictions that further limit how data and AI can be used. For example, Massachusetts (HD 3750) is considering legislation that would require healthcare insurance carriers to disclose the use of AI for reviewing insurance claims and to report information about their AI systems and training data. [6]

Businesses should consider proactive measures now to mitigate risks associated with proposed AI regulations. These measures include staying informed about regulatory developments, developing and implementing a risk management framework, identifying potential areas of risk across the AI lifecycle (development to operation) and conducting periodic risk assessments and audits. 

Ethical AI and Bias Mitigation

IoT devices generate vast amounts of real-time data, but the reliability and representativeness of that data are critical factors in the outcomes produced by generative AI systems. If IoT data is biased—whether due to sensor placement, environmental conditions, or demographic imbalances—the AI model trained on or augmented by that data will likely produce skewed outputs. For example, in healthcare applications, if sensors used for remote patient monitoring primarily collect data from one demographic group (e.g., younger patients or those with mild conditions), the generative AI system may struggle to generate insights that are relevant or safe for underrepresented populations, such as elderly patients or those with more complex health needs. This can lead to inaccurate predictions, inappropriate recommendations, or disparities in care.

Compounding this issue, IoT data biases often go unnoticed because they emerge subtly over time. Data drift—where sensor readings shift due to device wear, environmental changes, or network inconsistencies—can introduce distortions that generative AI models do not automatically detect. In industrial or operational settings, such as predictive maintenance in manufacturing, biased IoT data can cause AI-generated insights to favor certain equipment types over others, potentially leading to unexpected failures or inefficient resource allocation. To mitigate these risks, businesses must ensure that IoT data collection strategies prioritize representativeness, consistency, and real-world validation. This includes implementing rigorous data auditing, using adversarial testing to uncover hidden biases, and continuously refining the AI model with updated and representative datasets.
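Both failure modes can be monitored with lightweight statistics: a two-sample test for drift in sensor readings, and simple proportion comparisons for representativeness. A minimal sketch on synthetic data, with made-up cohort shares:

```python
# Minimal sketch: detect sensor drift with a two-sample KS test and
# check cohort representativeness with simple proportions.
# All data here is synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline = rng.normal(loc=50.0, scale=2.0, size=1_000)  # readings at deploy
current = rng.normal(loc=52.5, scale=2.0, size=1_000)   # readings today

stat, p_value = ks_2samp(baseline, current)
if p_value < 0.01:
    print(f"drift suspected (KS={stat:.3f}, p={p_value:.1e}): audit/retrain")

# Representativeness: compare cohort shares in the training data with
# the population the system serves (shares here are made up).
training_share = {"under_40": 0.72, "40_to_65": 0.23, "over_65": 0.05}
population_share = {"under_40": 0.45, "40_to_65": 0.35, "over_65": 0.20}
for cohort, share in training_share.items():
    if abs(share - population_share[cohort]) > 0.10:
        print(f"cohort '{cohort}' misrepresented: {share:.0%} in training "
              f"vs {population_share[cohort]:.0%} in population")
```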

Conclusion

The fusion of generative AI and IoT represents a transformative opportunity for businesses, unlocking new levels of automation, insight generation, and decision-making. Businesses currently using IoT in their operations should thoughtfully examine how and where generative AI can improve and enhance existing operations. However, as with any emerging technology, strategic adoption requires careful planning. Business leaders must consider measures that allow them to navigate challenges such as regulatory uncertainty, data privacy concerns, and AI model accuracy and reliability.

References

[1] SB 942. California AI Transparency Act. September 19, 2024. Link.

[2] J. Johnson, J. Ponder and A. Gweon. “State legislatures consider new wave of 2025 AI legislation.” Covington, February 21, 2025. Link.

[3] ibid.

[4] M. Barani and P. Van Dyck. “Generative AI and the EU AI Act – a closer look.” A&O Shearman, August 22, 2023. Link.

[5] ibid.

[6] See note [2]

[7] M. DeGeurin. “Oops: Samsung Employees Leaked Confidential Data to ChatGPT.” Gizmodo, April 6, 2023. Link.

This is the fourth in a series of blogs aimed at providing senior leaders and managers in mid-market organizations with a practical working knowledge of artificial intelligence (AI). If there are specific topics you wish for us to address in the future, please comment below.

Thanks for reading this post. If you found this post useful, please share it with your network. Please subscribe to our newsletter and be notified of new blog articles we will be posting. You can also follow us on Twitter (@strategythings), LinkedIn or Facebook.

Related posts:

Video Interview: The generative AI opportunity for IoT

The generative AI opportunity for IoT Part One

Different types of AI systems: A primer

The key to successful AI projects: Start with the right problems
