AI Data Oversight in an Era of Generative Technology
In the dynamic world of artificial intelligence (AI), unstructured data management has emerged as a key area of focus. Five crucial aspects – security, privacy, lineage, ownership, and governance of unstructured data, collectively known as SPLOG – are essential considerations for organisations adopting AI tools.
Preparing for AI has become the leading data storage priority for IT leaders in 2023, surpassing cloud migrations from the previous year. This shift is evident across sectors, including German manufacturing, where companies are establishing internal governance structures for AI that focus on trusted AI, ethics, transparency, and data security.
Privacy and security violations are a significant concern for corporate AI use, with 28% of leaders citing this as their top worry, while a larger share – 64% of IT leaders – express concerns about the ethics of generative AI. The potential for inaccurate or harmful outcomes due to biased, libelous, or unverified data in AI models is a substantial concern.
To address these challenges, IT and business leaders are urged to invest in data governance courses and educate employees on the safe use of AI technologies. Major cloud providers and enterprise software vendors are also offering generative AI-related solutions to cater to various use cases and business requirements.
However, running generative AI applications requires substantial resources, including high-performance computing capacity, efficient flash storage, and appropriate security systems. Organisations with stringent security and compliance requirements may opt for custom development approaches, but this requires significant investments in technology and expertise.
Cybersecurity, privacy concerns with personal data, and liability are the top three risks of generative AI, according to KPMG executives. Quality and control, safety and security risks, limiting human innovation, and human error due to a lack of understanding of the tool are primary concerns, according to a Harris Poll.
Despite these challenges, generative AI is a top business and technology strategy for many enterprises. The Salesforce State of IT report indicates that 86% of IT leaders believe generative AI will have a prominent role in their organisation soon.
Encouragingly, 90% of enterprises allow some level of AI adoption by employees, according to the 2023 Unstructured Data Management Report by the author's company. However, only 26% of IT leaders have a policy in place to govern AI, and only 21% impose restrictions on the data or applications that employees can use.
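For the minority of organisations that do impose restrictions on what employees can send to AI tools, a simple policy check might look like the following sketch. This is a hypothetical illustration, not any vendor's product: the rule names and patterns are assumptions, and a real deployment would use a proper data loss prevention system rather than a few regular expressions.

```python
import re

# Hypothetical policy rules for screening prompts before they leave the
# organisation for an external generative AI service. The patterns below
# are illustrative assumptions, not a real product's rule set.
BLOCKED_PATTERNS = {
    "confidential_label": re.compile(
        r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE
    ),
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of policy rules the prompt violates (empty if clean)."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

def is_allowed(prompt: str) -> bool:
    """A prompt may be sent externally only if no rule matches."""
    return not check_prompt(prompt)
```

Even a basic gate like this gives IT leaders an enforcement point for the restrictions the surveys describe, and the list of violated rules can feed an audit log for governance reporting.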
In an IDC white paper, 49% of respondents said they are concerned about releasing proprietary content into the large language models of generative AI technology providers. Lack of data source transparency and risks from inaccurate or biased data are also significant worries, each cited by 21% of leaders.
In the face of these challenges, 40% of IT leaders are pursuing a multi-pronged approach that includes storage, data management, and security tools to protect against generative AI risks. However, only 13% of workers have been offered any AI training by their employers in the last year, according to a survey commissioned by Randstad.
As the use of AI continues to grow, it is clear that investing in education, data governance, and robust security measures will be crucial for businesses to harness the potential of AI while mitigating the associated risks.