Government efforts are showing that real-world AI solutions need clear goals, community input, and transparency to build public trust.
Businesses see immense potential in integrating generative AI, but hurdles in scaling these capabilities must still be addressed.
Gaming has the potential to create an even deeper connection with players through AI, but it is not immune to the ethical risks facing other AI applications.
Autonomous agents seem to be the next big step for AI, with use cases expected to become widespread beginning in 2025.
Source: Counterpoint Research
Artificial Intelligence (AI) is here and it is up to companies to adopt it or get left behind. This is what ‘The AI Summit’, held in New York on December 11-12, conveyed in clear terms, backed by a wide range of experts. Government leaders, corporate strategists, tech developers and game designers came together at the summit to reveal where AI stands now and where it is heading next.
“Scaling” was the buzzword, as many speakers highlighted that companies now have the technology; the next step is to integrate it into their business models as efficiently as possible. The summit’s overall message was that AI is no longer just hype in the tech world that we need to wait for; it is here and already reshaping public services, corporate decision-making, and entertainment. At the same time, the summit discussed how companies and governments should counter the negative perception of AI, including concerns over privacy and transparency, and gain customers’ trust to fully roll out AI initiatives.
Public AI
One example of AI being used for the greater good of the people came from New York City CTO Matthew Fraser, who showed how the government aims to weave AI into the city’s everyday life. Instead of waiting for tech companies to set the agenda, the city is defining its own goals, like making residents aware of the local services they qualify for and using AI to reach them more efficiently. Of course, as in any conversation around AI, there is the issue of perception: much of the public neither trusts AI nor understands how it is used. Among the solutions Fraser mentioned to reshape this perception were open town halls, transparent data policies, and learning from early chatbot mistakes. It is a reminder that AI can empower communities only if people understand it, feel comfortable with it, and know their interests are being protected.
New York City CTO Matthew Fraser (right) discussing AI integration with policy.
Source: Counterpoint Research
Enterprise AI
On the corporate side, companies like Johnson & Johnson and Pfizer stressed the importance of good governance. They have already invested heavily in “responsible AI” frameworks – training their employees, monitoring model performance, and keeping a close eye on data privacy and bias issues. Generative AI (GenAI) can be incredibly useful, but it is also capable of spitting out nonsense or grabbing information from questionable sources. Businesses know they need guardrails in place, both to protect their reputations and to ensure customers keep trusting their brands. As they have introduced AI into their business models, they have already seen greater efficiency and lower costs in their processes, which has helped reduce operational strain and improve product consistency.
AI in gaming
Gaming has always been a fun and creative way to test AI capabilities, but it can also highlight the issues AI innately poses for society. Some developers are excited about using GenAI to create richer, more adaptive worlds that respond more dynamically and personally to players’ actions, giving a more unique gaming experience. At the same time, there is a fear that the market could end up flooded with generic games. Independent (indie) game developers have been known to push the envelope and drive new creative gaming content, but AI poses the risk of larger companies churning out games at a rate these indie developers cannot feasibly match due to time and resource constraints, especially financial ones. Further, AI-generated content could bring hidden biases or manipulative tactics into play, an issue already raised with some LLMs that are not properly monitored or trained. Balancing innovation with ethics and originality will be a big challenge for the gaming industry.
Conclusion
The general consensus at the summit was that AI will soon feel just as normal as using a smartphone. Multimodal models that handle text, image, video and audio at once are coming fast, as seen in the launch of Gemini 2.0 from Google. Autonomous agents — programs that make decisions, solve problems and learn on the fly — are also around the corner, and 2025 is likely to be ‘the year of the agent’. AI development and integration have happened very rapidly, and now is the time for industries to scale AI to their needs and put measures in place to use it safely and efficiently.