Microsoft is making significant changes to its AI development strategy by establishing a new division called “CoreAI – Platform and Tools”. CEO Satya Nadella announced this transformation, aiming to achieve in three years what previously took 30 years. The new division will consolidate several key teams, including development, AI platform, and key teams from the Chief Technology Officer’s office. Jay Parikh, formerly Vice President at Meta, will lead this initiative. The goal is to build a comprehensive “End-to-End AI Stack” for a world with dynamic, agent-based applications.
Microsoft plans to create a new “AI-first” software stack with three main components: new user interfaces, runtime environments for AI agents, and a redesigned management and monitoring layer. The Azure cloud will form the foundation of the AI infrastructure, with tools like Azure AI Foundry, GitHub, and VS Code being further developed. A major focus is on advancing GitHub Copilot. Microsoft introduces the term “Service as Software”, aiming to allow software to manage the development of custom applications. New AI applications will adapt flexibly to various roles, business processes, and industries.
In the United States, President Biden has authorized private AI data centers on federal land. Companies can now build data centers on Defense and Energy Department properties, provided they meet strict requirements, including full operation on renewable energy. Companies must cover all costs, including energy infrastructure, and source some semiconductors from American manufacturers. The Biden administration views domestic AI infrastructure as a security imperative, ensuring control over AI systems and preventing access by adversaries. The State Department aims to collaborate with allies to build trusted AI infrastructure globally.
The German Cultural Council demands fair compensation for creators, artists, and rights holders when AI models are trained with copyrighted works. Generative AI can produce results indistinguishable from human creations, posing existential threats to creators. Legal scholars disagree on whether using protected works for AI training is covered by copyright exceptions. The Cultural Council suggests contractual usage rights to ensure fair compensation and calls for a rapid legal discussion, especially at the EU level. New rules should apply whenever AI models are marketed or their results used in the EU, regardless of the provider’s origin.
A Bitkom study reveals that a third of German companies remain inactive regarding the IT skills shortage. However, some firms adopt constructive approaches: 35% invest in training employees for IT roles, 25% welcome career changers, 16% retain older IT professionals, and 13% target women for IT positions. Artificial intelligence plays a minor role as a solution, with only 5% of companies utilizing it. Among larger firms with over 250 employees, 21% use AI. Bitkom’s CEO emphasizes that AI can assist but not replace IT departments.
The UK government is launching a major AI initiative, aiming to position the country at the forefront of AI development. Prime Minister Keir Starmer unveiled a comprehensive plan, with three major tech companies—Vantage Data Centres, Nscale, and Kyndryl—pledging investments equivalent to 16.7 billion euros to expand the nation’s AI infrastructure. The plan includes special AI growth zones nationwide, accelerating data center planning and construction, and creating over 13,000 new jobs. The public sector, including the NHS, offices, and schools, will benefit from improved services and simplified administration through AI.
Microsoft has published the results of extensive AI security tests. Since 2021, its Red-Team has examined over 100 AI products for vulnerabilities and ethical risks. A key finding is that simple attack methods are often more successful than complex mathematical approaches. In one case, security mechanisms of an image generator were bypassed by embedding text in images. Integrating AI into applications introduces new security risks. In a test, a language model was manipulated for automated fraud scenarios. Microsoft developed an automated test framework called PyRIT but emphasizes the essential role of human expertise, particularly in assessing ethical risks and culturally specific content. The team’s conclusion: AI security cannot be solved as a one-time technical issue. Companies must continuously test and improve their systems.