So Annie and I were talking about academic efforts to build computational infrastructure in the wake of “AI.” What is going on out there, you might ask. Well, this is what Annie (GPT-5 deep research) reported. I will note that the consortium in which my university is participating, Empire AI, is not listed. You’d have to ask Annie why, and then she’d be happy to tell you about Empire AI if you were inclined to learn more.

All the stuff below the separator is AI-gen, so there’s your disclaimer. I learned a lot from reading it. The point isn’t that I now don’t have to spend that 20 hours or whatever; it’s that I keep going with this as part of my capacities. So while it doesn’t talk about Empire AI, it does help me better understand the landscape.

Mostly it’s not surprising. MIT, Harvard, USC, Johns Hopkins, and Stanford are where Annie found indications of $1B investment pledges. And then there are the other big collaborations going on around the world: NSF, Oxbridge, Germany’s Cyber Valley, and so on. They all balance industry-friendly and social-good discourse in one mix or another. So that’s the first bit.

The second bit of Annie’s report is more interpretive. There we see a potentially strange bifurcation in the discourse. On the one hand, there is the big push to teach “AI for everyone,” as it will be important in every discipline and profession. Maybe. Probably. Maybe. What is certain is that this insistence on plastering “now with AI” stickers everywhere will shape the curriculum.

Hmmm… OK, well, how?

That might depend on who gets to define what “AI” means on the campus. My guess is that the folks who are bringing in large grants and proving the value of university investment in this computational infrastructure will be the ones who define AI for the campus. Not because they’re egotistical or power-hungry. They might be; those kinds of folks are around. But that’s not the point. They will define AI for the campus because the campus will need to invest in their definition in order to get the ROI it needs.

It doesn’t have to go that way, but I would suggest universities will need to be more than nudged to go another direction. They are not designed to turn down money.

The other key point is that when a university decides to invest $1B in something, it is invested in making it work out. When a university invests in making AI work out in a particular way, how might that shape the kind of research it encourages? The degree programs it trumpets? The news it shares? If you did an n-gram of your university’s official communications, I’m guessing “AI” would be a hockey stick. How long is that going to keep happening? Not that any of us would care about a university’s PR, but maybe it is a proxy for how the university imagines its audience imagines it.
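If you wanted to check that hunch, here is a minimal sketch (mine, not Annie’s). It assumes a hypothetical folder of press releases saved as plain-text files; the folder name and date-prefixed filename convention are made up for illustration:

```python
# Minimal sketch: chart yearly "AI" mentions in university press releases.
# Assumptions (hypothetical): one .txt file per release in press_releases/,
# each filename starting with an ISO date, e.g. "2023-04-11-new-institute.txt".
import re
from collections import Counter
from pathlib import Path

AI_RE = re.compile(r"\bAI\b")  # case-sensitive, so "ai" inside other words is ignored
LONGFORM_RE = re.compile(r"\bartificial intelligence\b", re.IGNORECASE)

def ai_mentions_by_year(folder: str) -> Counter:
    counts: Counter = Counter()
    for path in Path(folder).glob("*.txt"):
        year = path.name[:4]
        if not year.isdigit():
            continue  # skip files without the assumed date prefix
        text = path.read_text(encoding="utf-8", errors="ignore")
        counts[year] += len(AI_RE.findall(text)) + len(LONGFORM_RE.findall(text))
    return counts

if __name__ == "__main__":
    for year, n in sorted(ai_mentions_by_year("press_releases").items()):
        print(year, "#" * min(n, 60), n)  # crude text plot; look for the hockey stick
```

Crude, but enough to see whether the curve bends.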

I actually think the report gets odd in tone, and into the weeds, when it starts recounting various accusations going this way and that. And yet, such is the nature of our public discourse? It’s less that there are hallucinations (had I found any, I would have removed them, but I didn’t) than that there is selection bias. Why these details?

* * *

Over the last few years, leading research universities (public and private) have launched large-scale AI initiatives—often backed by hundreds of millions of dollars in funding. These initiatives take various forms, from new colleges and research institutes to multi-institution consortia and public-private partnerships. Below, we survey some of the most prominent efforts in the U.S. and Europe, outlining their funding scale, participants, and research focus. We then discuss how these investments are shaping AI research across disciplines, what visions of AI’s future they embody, and whether they steer society toward particular technical or social outcomes.

Major AI Initiatives at U.S. Research Universities

MIT – Schwarzman College of Computing (est. 2019): MIT established an entirely new college for computing and AI, enabled by a $350 million gift from Stephen Schwarzman as part of a broader $1 billion commitment[1]. The MIT Schwarzman College of Computing aims to “bring the power of computing and AI to all fields of study at MIT”, doubling MIT’s faculty in computing (50 new positions) and infusing AI across disciplines[2]. A core goal is responsible and ethical AI: the college’s mission includes educating all students to “use and develop AI… to help make a better world” and advancing policy and ethical research related to AI[3]. MIT’s president described this as an effort to reshape MIT for the AI era, ensuring computing “reshapes our world… for the good of all”[4]. Notably, MIT explicitly frames the college as positioning the U.S. to remain globally competitive in AI, with President L. Rafael Reif and donor Schwarzman both emphasizing America’s leadership and security interests in AI innovation[5][6]. In practice, the Schwarzman College integrates computer science with interdisciplinary research (from genomics to economics), and it has made MIT a hub for developing “powerful new AI tools” while critically examining their societal impact[4].

Stanford – Institute for Human-Centered AI (HAI) (est. 2019): Stanford University launched the HAI institute with the mission of “guiding artificial intelligence to benefit humanity.”[7] HAI was founded as an interdisciplinary, university-wide institute that brings together faculty from all 7 Stanford schools (engineering, medicine, law, business, humanities, etc.) and plans to hire additional AI-focused faculty across fields[8]. Co-directed by former provost John Etchemendy (a philosopher) and Fei-Fei Li (a leading AI scientist), Stanford HAI stresses a “diversity of thought” and close collaboration between technologists and social scientists[9]. Its mission is to advance AI research, education, and policy in a way that “improve[s] the human condition.”[10] In line with that human-centered vision, HAI partners with industry and government but explicitly to foster “a better future for humanity through AI.”[11] Early on, Stanford’s president noted that AI’s rapid advances “require a true diversity of thought,” calling for humanists to work alongside AI developers to shape the future[9]. HAI’s focus areas range from technical AI research to ethics, economics, and public policy; for example, it funds seed grants across departments (Stanford HAI has already awarded tens of millions in funding to faculty research across fields[12]). Overall, Stanford’s initiative reflects a deliberate vision of AI that is guided by human values, ethics, and multidisciplinary input, rather than a purely tech-driven agenda.

Harvard – Kempner Institute for Natural & Artificial Intelligence (est. 2022): At Harvard University, a massive $500 million donation from Priscilla Chan and Mark Zuckerberg (Chan Zuckerberg Initiative) endows the Kempner Institute[13][14]. Launched in late 2021/early 2022, the Kempner Institute focuses on the intersection of neuroscience and AI, aiming to study “natural and artificial intelligence” side by side[13]. This reflects a research vision that understanding human/animal brains can inform AI algorithms and vice versa. The gift – one of the largest in Harvard’s history – funds new faculty positions, computing infrastructure, and research programs over 15 years[13]. The Institute explicitly plans to hire 10 new faculty and build computing platforms to model intelligence[13]. The intended focus is on fundamental questions of intelligence: how brains compute, how AI can mimic cognitive processes, and how insights from neuroscience could lead to new AI paradigms[13]. The scale of the investment has raised some critical questions about donor influence (as discussed later), but from a research standpoint it signals a commitment to long-term, basic research at the AI-neuroscience nexus, rather than short-term commercial AI development. Harvard’s president described the institute as part of a broader effort to bridge disciplines and “answer questions about AI and the mind”. In essence, the Kempner Institute’s vision of AI is tightly coupled with scientific discovery in brain science, suggesting a path toward AI that is informed by how natural intelligence works.

University of Southern California – Frontiers of Computing (launched 2023): USC, a private research university, recently announced a sweeping $1 billion+ initiative called “Frontiers of Computing.”[15] Unveiled in 2023 by USC President Carol Folt, this initiative will infuse computing and AI across all of USC’s programs and create a new School of Advanced Computing[16][17]. Over the next decade, USC is investing $1 billion to dramatically expand AI and computing research, hire faculty, and integrate AI into disciplines from the sciences to the arts[17]. A hallmark of USC’s plan is its emphasis on “AI literacy for all students” and new interdisciplinary degrees blending AI with business, medicine, and other fields[16][18]. The initiative also includes building a new high-tech campus in “Silicon Beach” (Los Angeles’s tech corridor) to foster industry partnerships[19]. USC’s expansion explicitly addresses the ethical and societal dimensions of AI: for example, a portion of the funds (about $12M) is earmarked to establish an Institute on Ethics & Trust in Computing to ensure “safety and responsibility” are core to technological advances[20]. The Frontiers of Computing initiative is thus notable for its breadth (impacting all 22 USC schools) and its message that future competitiveness requires a university-wide transformation in computing. This indicates a vision of AI as a ubiquitous tool—something every student and researcher should know how to use—while also recognizing the need for ethical guardrails.

Johns Hopkins University – AI-X Foundry and Data Science Initiative (2023): Johns Hopkins (a top U.S. research university) in 2023 launched a major interdisciplinary institute for data science and AI, underpinned by a substantial internal investment[21][22]. The new institute (often referred to as the AI-X Foundry as an initial phase[23]) is designed to bring together experts in AI, machine learning, applied mathematics, engineering, medicine, public health, social sciences and humanities to tackle a range of research challenges[24][23]. Hopkins is committing significant resources: 80 new faculty slots in the School of Engineering dedicated to the institute, plus 30 cross-disciplinary Bloomberg Distinguished Professors with joint expertise, indicating on the order of hundreds of millions of dollars in long-term funding when considering salaries and facilities[25][26]. The focus areas span from “neuroscience and precision medicine to climate resilience and the social sciences”[27] – a deliberately broad scope that mirrors the pervasive role of big data and AI in all fields. Notably, the Hopkins initiative highlights not just opportunities but also “the risks of data” and the ethical development of AI systems[28]. It is to be housed in a state-of-the-art facility with advanced computational resources[29]. In the words of JHU’s president, “Data and artificial intelligence are shaping new horizons of research… with profound implications for nearly every facet of academia”, hence Hopkins aims to “harness… opportunities and challenges” across the whole university[30]. This underscores a future path where AI methodologies become integral in diverse disciplines, and where universities feel compelled to invest heavily to stay at the forefront of data-driven discovery.

University-Industry Collaborative Initiatives: Many universities are also pursuing big AI efforts through partnerships with corporations or government, aligning academic research with industry resources:

  • MIT–IBM Watson AI Lab (est. 2017): While a bit earlier than our 2–3 year window, the MIT-IBM lab is emblematic of corporate-university collaboration. IBM pledged $240 million over 10 years to this joint lab, where 100+ researchers from MIT and IBM work on fundamental AI science (from algorithms to AI hardware) and applied projects in healthcare, cybersecurity, etc.[31][32] The lab’s mission is advancing core AI technology (deep learning, neural networks, etc.) while also studying AI’s impact on industries[33]. This partnership signaled IBM’s and MIT’s shared vision of pushing the frontier of AI research, and it complemented MIT’s internal investments (indeed, MIT’s Schwarzman College was said to “build powerfully” on the ongoing MIT-IBM Watson lab research[34]).
  • IBM–Illinois Discovery Accelerator Institute (since 2021): IBM also invested with the University of Illinois Urbana-Champaign (UIUC) and the state of Illinois to create a Discovery Accelerator Institute backed by ~$200 million over 10 years[35][36]. The focus is on “accelerating breakthroughs” in hybrid cloud computing, AI, quantum computing, materials discovery, and sustainability[37][36]. This institute will build new computing infrastructure at UIUC and cultivate workforce development in these cutting-edge areas[37]. It illustrates how a public research university can partner with both industry and government (the State contributed funds) to become a regional/national hub for AI and related technologies. The intended research path here is heavily toward big-iron science – leveraging IBM’s expertise in high-performance computing and quantum tech to support future AI advances.
  • C3.ai Digital Transformation Institute (launched 2020): In the early days of the COVID-19 pandemic, a consortium of leading universities formed the C3.ai DTI, funded by a $57 million cash pledge from enterprise AI firm C3.ai (plus ~$310M in-kind cloud credits from Microsoft)[38][39]. UC Berkeley and UIUC lead this consortium, which includes Carnegie Mellon, MIT, Princeton, the University of Chicago, and others[40]. The institute awards research grants to apply AI to societal challenges; its first call in 2020 targeted AI techniques for COVID-19 mitigation[41]. The total backing (cash + in-kind) was valued at $367 million[39], making it a large-scale public-private investment. The focus is explicitly on multidisciplinary, “transformative” uses of AI – from epidemiology to energy – hence it spans many academic fields. C3.ai DTI shows a collaborative vision where academia and industry share resources to direct AI at urgent real-world problems (digital transformation of healthcare, climate modeling, etc.). It also highlights how corporate philanthropy (from a software company and a cloud provider) can shape academic research agendas in AI, steering them towards specific “AI for good” or “AI for business transformation” projects.
  • NSF National AI Research Institutes (2020–Present): The U.S. National Science Foundation has, since 2020, launched a program funding multiyear AI institutes at universities. By 2023 NSF had invested over $500 million to create 25 AI Research Institutes across the country[42][43]. Each institute is led by one or more universities and focuses on a thematic area of national importance. Examples include an AI Institute for Agricultural Resilience (led by University of Illinois), AI Institute for Molecular Discovery (at MIT and others), AI Institute for Student Learning (at UT Austin), AI in Climate, Healthcare, Cybersecurity, and so on. Each award is about $16–20M over 5 years[44]. Collectively, this is a huge infusion of federal funding into university AI research, explicitly to “secure American leadership in AI”[45] and to spread AI’s benefits across sectors. The intended focus of these institutes is often interdisciplinary – pairing AI researchers with domain experts (whether farmers, biologists, or education specialists). For instance, one 2023 institute is devoted to AI for climate-smart agriculture, integrating ecology and machine learning. Another targets AI for adult learning and online education. This program indicates a broad U.S. vision that AI will transform every discipline, and that the government is committing large resources to ensure academic research drives AI innovation in socially beneficial directions. It also shows a path emphasizing applied AI research with tangible impacts (food supply, education, public health, etc.), alongside fundamental advances.

Major University AI Initiatives in Europe (and Canada)

European universities and governments have likewise made significant investments (often €100M+ scale) in AI research consortia, seeking to build capacity and assert a “European vision” for AI that often emphasizes trust, ethics, and public benefit alongside technological advancement.

United Kingdom – National AI Research and Compute Initiatives: The UK’s national strategy has recently led to major investments involving universities: for example, in 2023 the government announced a £300 million program to create a dedicated AI Research Resource (AIRR)[46]. This includes building the country’s most powerful AI supercomputers at the University of Cambridge (“Dawn” cluster) and University of Bristol (“Isambard-AI”), in partnership with Intel and Dell[47][48]. The goal is to provide world-leading computational capacity to academic and public sector researchers, enabling the training of frontier AI models domestically[49][50]. Notably, this investment was unveiled at the UK’s AI Safety Summit, highlighting an emphasis on safe and responsible AI development[51]. The Frontier AI Taskforce, a UK government expert team examining cutting-edge AI risks, will have priority access to these university-hosted supercomputers[52]. In parallel, the UK has funded the Alan Turing Institute (a national AI institute based in London, established 2015) and new university labs focusing on AI ethics and governance. For instance, Oxford University received a £150M (≈$188M) gift in 2019 from Stephen Schwarzman to establish the Schwarzman Centre, which includes the Oxford Institute for Ethics in AI[53][54]. Launched in 2021, Oxford’s Ethics in AI institute (part of its Humanities division) brings together philosophers, legal scholars, technologists, etc., to research the ethical and social implications of AI. Its creation with such a large donation signaled a commitment in Europe to embedding ethical inquiry into AI development. Similarly, the University of Cambridge in 2022 launched a research centre dedicated to “promoting ethically sound AI technologies”[55]. In summary, the UK’s recent investments show two parallel priorities: building up technical infrastructure and skills (world-class AI computing clusters, training more AI researchers across universities), and investing in ethical, safe AI research to ensure the technology’s development aligns with societal values. This suggests a vision of near-future AI in the UK that balances raw innovation with governance — e.g. being a “world leader in AI safety.”[56]

European Union and Continental Europe – Networks and Centers of Excellence: Across Europe, multi-institution collaborations have been key. A prominent example is ELLIS (European Laboratory for Learning and Intelligent Systems) – a pan-European network launched in 2018–19 by leading AI scientists to nurture AI excellence in Europe. By 2021, ELLIS established units in dozens of universities (from Cambridge and Oxford to Zurich, Paris, and beyond) with a collective funding commitment of €300 million (over 5 years)[57]. These ELLIS units focus on fundamental machine learning research but also on retaining talent in Europe by offering resources comparable to Big Tech labs. Another grassroots initiative was CLAIRE (Confederation of Labs for AI Research in Europe), advocating for “AI made in Europe” with a human-centered approach, though its funding has been more modest (support from the EU for coordination, not on the order of hundreds of millions to a single center). The European Union itself, through Horizon 2020 and Horizon Europe programs, has funneled large grants into AI. For instance, the EU supported networks of AI excellence centers and “ICT-48” projects, each uniting multiple universities on topics like trustworthy AI, AI for robotics, AI for healthcare, etc. While each project might be €5–20M, collectively these sum to significant investment. Moreover, the EU and national governments co-funded large-scale computing infrastructure (e.g. the LUMI supercomputer in Finland, one of the world’s fastest, to support AI and scientific computing for European researchers).

France – Interdisciplinary AI Institutes (3IA) and National Strategy: In 2018, France announced a national AI strategy emphasizing investment in research and “AI for humanity.” The government created four new Interdisciplinary Institutes of Artificial Intelligence (3IA) in 2019, each based at a consortium of universities: Prairie in Paris, MIAI in Grenoble, ANITI in Toulouse, and 3IA Côte d’Azur in Nice. Each 3IA institute received on the order of €20–€30 million in state funding (plus matching funds from industry and local authorities), amounting to hundreds of millions of euros collectively[58][59]. These institutes have specialized focuses: for example, ANITI (Toulouse) concentrates on developing “hybrid AI” – combining data-driven machine learning with symbolic reasoning and formal methods to ensure AI systems are transparent and reliable[60]. This is driven by Toulouse’s aerospace industry (Airbus is a partner) where safety is paramount; ANITI’s work on neuro-symbolic AI exemplifies a European commitment to trustworthy AI by design. Meanwhile, Prairie (Paris) brings together academia and companies (like Google, Amazon research centers in Paris) to work on machine learning fundamentals and applications in health. The French strategy’s second phase (2021–2025) pledged €2.2 billion more for AI, including significantly scaling up “trusted AI” research funding from €45M to €271M[58]. President Macron also secured tens of billions in private R&D investments (though those span AI, semiconductors, etc.)[61]. In essence, France’s approach has been to heavily fund academic AI hubs that are embedded in local innovation ecosystems (each with an application bent like healthcare, mobility, or defense) and to champion a vision of AI that aligns with European values (e.g. robust, explainable, and serving the public good).

Germany – Cyber Valley (Tübingen/Stuttgart) and Beyond: Germany’s biggest academic AI initiative is Cyber Valley, launched in 2016 in the state of Baden-Württemberg. Seeded with an initial €165 million from the state government, Max Planck Society, and industry partners[62], Cyber Valley is a cluster linking the University of Tübingen, University of Stuttgart, Max Planck Institutes, and companies like Amazon, Bosch, Daimler, and others[63]. Over the past 8 years it has grown into one of Europe’s largest AI research collaborations, funding numerous research groups and labs in fields like computer vision, robotics, and machine learning. Cyber Valley’s model emphasizes a tight academic-industry nexus but with measures for public accountability: it established an independent ethics advisory board to vet the social implications of projects[64]. The rationale is to combine world-class fundamental research with pathways to commercialization (startups, tech transfer), “in a way that is publicly and socially accountable”[65][66]. Indeed, Cyber Valley runs startup incubators and has produced successful AI startups (e.g. Aleph Alpha, a German NLP foundation-model company that emerged from Cyber Valley and has raised ~$600M[67]). Corporations have sponsored specific chairs or labs (Amazon, for instance, opened an “AI lablet” in Tübingen and funded fellowships[68][69]), but Cyber Valley’s governance tries to prevent undue corporate influence on research directions[70]. This reflects a European alternative to the U.S. model: instead of tech companies doing all frontier AI R&D in-house, they co-invest in university ecosystems, with the university maintaining academic freedom and an ethical compass. German federal initiatives also established competing AI centers in Munich, Berlin, etc., with tens of millions in funding each, though those are somewhat smaller. For example, MCML (Munich Center for Machine Learning) and BIFOLD (Berlin Institute for Foundations of Learning and Data) were funded (~€25M each) as part of Germany’s AI strategy in 2018–2022. These centers bring together multiple universities and emphasize fundamental AI research (often in partnership with application sectors like automotive in Munich or internet technology in Berlin).

Switzerland – National AI Initiative (2023): Two of Switzerland’s top universities, ETH Zurich and EPFL (Lausanne), have joined forces to create the Swiss National AI Institute (SNAI) in 2023[71][72]. Backed initially by ~CHF 20 million from the Swiss government’s ETH Board for 2025–2028 (and additional university and industry contributions)[73], SNAI is a coordinated national push to keep Switzerland at the cutting edge of AI. A primary aim is to develop “Switzerland’s first national foundation model for languages” and other large AI models, with an ethos of transparency, open source, and trustworthiness[74][75]. In other words, ETH and EPFL are committing to build very large-scale AI systems (50+ billion parameter models) – akin to OpenAI’s GPT or Google’s models – but aligned with Swiss values and made available for research and industry in Switzerland[76][77]. This includes providing the needed compute infrastructure (they seeded the effort with 10 million GPU hours on supercomputers)[78]. The SNAI is coupled with a broader Swiss AI Initiative that involves over 10 institutions and is focused on deploying AI in key areas like healthcare, sustainability, and education, while open-sourcing models whenever possible[78][79]. The messaging around SNAI echoes Europe’s stress on ethical AI: it aspires to put Switzerland at the forefront of “inclusive, reliable, transparent, and trustworthy AI”[80]. This is a notable commitment to a particular vision of AI’s near future – one where even smaller countries develop their own large AI models (reducing reliance on Big Tech) and do so in a way that is publicly accountable and tailored to local languages/cultures. It also reflects the rapid pivot in 2023 toward foundation model development as a strategic goal for universities and governments.

Canada – AI Research Institutes (Mila, Vector, Amii): Though not in Europe, Canada deserves mention as an early mover in huge AI investments. Around 2017, Canada poured over C$200 million (federal and provincial funds combined, plus corporate backing) into three AI institutes affiliated with universities: Mila (Quebec, led by Université de Montréal and McGill), the Vector Institute (Toronto, affiliated with University of Toronto), and Amii (Edmonton, at University of Alberta). These centers helped establish Canada as a deep learning powerhouse, each with dozens of faculty and hundreds of trainees. For example, the Vector Institute launched with more than C$150M from government and industry partners to support research in machine learning and commercialize AI innovation in Ontario. Mila in Montreal, led by Yoshua Bengio, similarly received major funding and focuses both on fundamental research in deep learning and socially beneficial uses of AI (Bengio has also emphasized “AI for humanity” themes, and Montreal has strengths in AI ethics research). While these were launched ~5–6 years ago, they laid a template for the university-centric AI institute model that others are now emulating. They also showed how large investments can create regional AI clusters (Toronto and Montreal became magnets for AI talent). The Canadian institutes continue to grow (often collaborating with each other and with global companies) and underscore the importance of long-term funding (much from public sources) in sustaining academic leadership in AI.

Global Collaborative Efforts (Open Models and Consortia): A unique example of a consortium spanning universities, companies, and international labs is the BigScience project (2021–2022), which produced the open-source language model BLOOM. Initiated by AI researchers at Hugging Face and supported by the French government’s supercomputing center (GENCI), BigScience brought together over 1000 researchers (many from universities in Europe, the U.S., etc.) to train a large multilingual AI model with 176 billion parameters[81]. This was essentially academia’s answer to giant proprietary models like OpenAI’s GPT-3. Importantly, BigScience emphasized open science and ethics: the collaboration worked on a public data charter, had workshops on responsible AI, and ultimately released BLOOM openly with details of its training process. The project won an HPC innovation award for its collaborative nature[81]. While not centered at a single university, BigScience shows how academic and public-sector actors can partner to create AI systems that reflect a different social end – i.e. transparency and global access – compared to the corporate race. Similarly, in Europe, initiatives around “AI for Good” and AI regulation involve university experts: e.g. the EU’s HumanE-AI Network or the AI4EU platform connected researchers working on ethical AI solutions.

Across these examples in Europe, a common thread is consortium-building: universities rarely go it alone, but rather form networks with other academia, government agencies, and industry. The scale (often hundreds of millions in aggregate) and language around these projects highlight Europe’s attempt to define an AI trajectory that is innovative yet aligned with societal values and sovereignty. European projects often explicitly mention “trustworthy AI,” “human-centric AI,” or “AI for public good.” For instance, Switzerland’s SNAI will focus on open, trustworthy foundation models[77], and Cyber Valley built in an ethics checkpoint for research[64]. This contrasts somewhat with the U.S., where competitiveness and innovation are dominant narratives – although U.S. universities also incorporate ethics (e.g., Stanford HAI, MIT’s focus on ethical computing, etc.).

Shaping AI Research Across Disciplines

Massive university investments are reshaping how AI research is done – and who does it. A clear impact is the integration of AI into virtually every academic field. Initiatives like MIT’s College of Computing and USC’s Frontiers of Computing explicitly aim to “bring AI to all fields” and train all students in AI skills[82]. As a result, disciplines that traditionally sat outside of computer science (e.g. medicine, law, sociology, climate science, humanities) now have AI research threads led by newly hired faculty or joint appointments. For example, at Johns Hopkins the new AI institute will embed data scientists within neuroscience labs, environmental science projects, public policy, and more[28][25]. Stanford HAI similarly has economists, philosophers, legal scholars working alongside AI technologists[9][83]. This broad interdisciplinary approach suggests that AI is no longer a niche topic in universities – it’s becoming a core component of research and curricula across the board. In the short term, this means rapid growth of AI-informed research in social sciences and humanities (e.g. using machine learning for economic forecasts or analyzing literature), and in turn, those fields influencing AI (e.g. ethicists shaping how algorithms are designed). The stated intent is that by embedding AI in all disciplines, the future of AI will be shaped by a diversity of knowledge, not just computer scientists[82].

At the same time, the sheer scale of funding and infrastructure being deployed – especially the push for large models and big compute – tends to favor certain research paths. Many of these initiatives are betting on data-intensive, compute-hungry AI (especially deep learning and “foundation models”) as the future. For instance, building giant language models in Switzerland or providing exascale GPU clusters in the UK implies that the next decade of AI research is expected to involve training ever-larger neural networks and pushing the frontiers of “frontier AI.” Universities are effectively committing to that path by procuring expensive hardware and forming collaborations to access big data. This focus on large-scale AI might marginalize work on alternative approaches (for example, symbolic AI or smaller-scale, edge AI) unless explicitly included. However, some initiatives do mention alternative paradigms: Toulouse’s ANITI focusing on hybrid symbolic-ML AI is one example of committing resources to a less mainstream approach (motivated by trust and verifiability)[60]. Overall though, the prevailing vision in these big investments is that general-purpose AI technologies (like powerful language models, computer vision systems, etc.) will be key tools for progress in all fields – hence, universities want their own such tools or at least the capacity to develop and study them.

Another way these investments shape research is through new organizational structures and incentives. By creating dedicated AI institutes or colleges, universities send a message to faculty and students that AI-related work is a high priority (often coming with new funding opportunities, faculty lines, and facilities). This can attract top talent to academia (some AI researchers might otherwise join industry for resources; but with a $500M institute at Harvard or a $1B initiative at MIT, academia can be competitive in offering support). It also encourages internal collaborations: e.g., medics teaming with AI experts to get grants from the new AI institute. In the long run, this could lead to AI-flavored subfields emerging within traditional disciplines – we might see more things like “computational social science,” “AI medicine,” “AI law” as standard parts of academia. Many of the initiatives explicitly plan to educate the next generation of AI-literate professionals (USC’s plan to integrate AI into all student training, or MIT’s requirement for responsible computing modules in courses). This widespread education mission means these investments will produce thousands of graduates versed in AI tools, who then carry those into industry, government, and society.

Visions of AI’s Near Future – Pathways and Commitments

These big initiatives indeed reflect particular visions of how AI will function in the near future and, by putting money behind those visions, they arguably commit society to certain paths:

  • AI as Ubiquitous Infrastructure: A common thread is the expectation that AI will penetrate every sector. Universities building new AI colleges (MIT, USC) or making university-wide pushes (Hopkins, Cambridge’s compute resource, etc.) are effectively committing to a future where AI is as fundamental as literacy. They foresee AI as a general-purpose technology that every field must harness. This path assumes that progress in disciplines will come from harnessing big data and predictive algorithms. It can yield great benefits (e.g. new medical diagnostics, better climate models, personalized education), but it also means we start to approach human and social problems predominantly through a technological lens. There is a political/ideological dimension here: it aligns with a technocratic vision that more data and computing are keys to innovation (“AI solutionism”). Some critics worry this could downplay non-AI solutions or overlook the risks of over-reliance on algorithms. But the universities are clearly betting that not embracing AI in every field would mean falling behind academically and economically.
  • National/Regional Competitive Vision: Many investments are explicitly framed as part of a global competition or race (sometimes called the new “Sputnik” moment for AI). MIT’s announcement talked about helping the U.S. “lead the world” and bolster national security with AI[5]. France’s strategy is about being a “champion of AI” on the international stage[84]. The UK chancellor tripled the AI compute investment to secure Britain’s place as a “world leader” in AI research[51][56]. This competitive framing drives a particular research path: one focused on frontier capabilities (bigger models, faster chips, breakthrough innovations) under the assumption that whoever gets them first gains economic and strategic advantage. It tends to prioritize speed and scale – e.g., making sure facilities are in place to not depend on foreign AI models. The social end here is partly nationalist: it’s about jobs, GDP, and security for one’s country or region. The risk is that it can lead to less emphasis on international collaboration or on regulating AI, since competition mindset might encourage pushing the technology boundaries quickly. However, some initiatives (like Europe’s) try to pair competitiveness with ethics (branding “trustworthy AI” as something Europe can lead in). In effect, these investments commit us to a near future where AI advancement is seen as a strategic imperative, potentially making it harder to press “pause” if ethical concerns arise, because no one wants to fall behind.
  • Human-Centered and Ethical AI Vision: Counterbalancing the tech-driven approach, a number of initiatives explicitly promote a vision of AI that is human-centric, ethical, and beneficial to society. Stanford HAI’s very name and mission encapsulate this – improving the human condition and involving social scientists to guide AI[85][9]. Oxford’s Ethics in AI institute is another clear example, as is the new ethics-focused center at USC and the independent ethics oversight in Cyber Valley. By investing in these, universities (and donors) are committing to a path where the development of AI goes hand-in-hand with inquiry into moral, legal, and societal questions. This can shape research agendas to include topics like AI fairness, accountability, and policy, which might otherwise get less attention. It also creates a check-and-balance culture within AI hubs: for instance, MIT’s college has built ethics into its curriculum and research initiatives, which could influence what kinds of AI projects get pursued or funded (favoring those that consider societal impact). The presence of ethicists and social scientists in AI institutes may push AI research towards being more transparent and considerate of issues like bias, privacy, and safety. In the near future, this could mean mainstream AI research incorporates ethical risk assessments as a standard, and technologies are evaluated not just on accuracy but on alignment with human values. The question is whether these ethics initiatives have teeth or just talk. Some skepticism comes from seeing tech donors fund ethics centers that might deflect external regulation. Nonetheless, the investment in ethics research signals a commitment to at least grapple with AI’s social implications within academia, rather than leaving that entirely to outside watchdogs.
  • Open vs. Closed AI: A subtle but important vision difference is between efforts that push open-source, democratized AI and those content with proprietary or closed models. Some university consortia (like BigScience/BLOOM, or the Swiss SNAI to an extent) champion openness – releasing datasets, tools, and models for anyone to use[86][79]. This aligns with a social end of democratizing AI technology, preventing a scenario where only a few tech giants hold the most powerful AI systems. By putting public money into open models, these initiatives commit to an AI future that is more accessible and transparent. On the other hand, many U.S. university collaborations with industry (and even some government-funded projects) might produce AI that ends up patented or commercialized, not openly shared. For instance, the C3.ai institute’s grants often lead to papers and public knowledge, but corporate partners might eventually productize the findings. The tension between open science and proprietary advantage is a political one – it reflects how power and benefits from AI are distributed in society. Europe’s emphasis on open, trustworthy AI can be seen as a political stance distinguishing it from the U.S. and China. Universities are somewhat caught in between: they value open intellectual exchange, but also patent and spin-off companies. These large initiatives sometimes explicitly state their stance (e.g., SNAI will open-source models “whenever possible”[86]). The path we end up on (more open or more closed) will affect who can participate in AI innovation in the future (just Big Tech engineers, or a wide swath of researchers and even citizen scientists).
  • Applications and “AI for Good” vs. Pure AGI pursuit: None of the university initiatives explicitly frame themselves as chasing artificial general intelligence (AGI) in the way some private ventures do. They tend to emphasize applications to grand challenges (climate, health, etc.) or safe and fair AI. This suggests a near-term vision of AI as a powerful tool to augment human efforts rather than to autonomously surpass humans at all tasks. The social commitments here are to solving concrete problems: e.g., better public health outcomes, smarter city governance (as in JHU’s use of AI for public sector innovation[27]), or improved education. By putting hundreds of millions into “AI for climate resilience” or “AI for healthcare,” as many institutes do, society is being steered to rely on AI as a solution mechanism in those domains. The political angle is that governments and donors might prefer technical fixes (AI-driven) to, say, regulatory or behavioral solutions to problems – which can be controversial. For instance, rather than strictly regulating emissions, funding might go to AI systems optimizing energy efficiency. If these bets pay off, AI could indeed help solve problems; if not, we might have diverted attention from other approaches. Nonetheless, the commitment to “AI for good” at least anchors AI development in addressing public needs, potentially counteracting a purely military or profit-driven trajectory. We also see some institutes (especially in Europe) aligning with social inclusion goals – e.g., AI for improved government services, AI that benefits underserved populations – tying the AI agenda to broader social progress.

Social and Political Implications of Big AI Investments

The question arises: Do these university-led investments in AI commit us to particular social or political ends? While universities portray their initiatives as benign engines of innovation and knowledge, there are indeed underlying social/political dynamics to consider:

  • Concentration of Power and Influence: With such large sums at play, there is concern that control over AI research (and thus AI’s future) could become concentrated among a few elite universities and their corporate/government partners. When a handful of wealthy institutions host the best AI models and supercomputers, they set research agendas and perhaps favor their stakeholders’ interests. For example, some of the biggest university AI centers are heavily funded by Big Tech philanthropists or companies. Mark Zuckerberg (and CZI) alone has donated “hundreds of millions of dollars” to over 100 U.S. universities[87] – including $500M to Harvard and significant grants to UC Berkeley, MIT, and others – which gives him “potential leverage to influence the institutions.”[87] Indeed, one high-profile controversy involved a Harvard disinformation researcher, Joan Donovan, who alleged she was pushed out after Harvard received the $500M CZI gift for the AI institute[88]. She was working on something embarrassing to Meta (an archive of Facebook leaks), raising questions about whether donor influence curbed critical research[14]. Harvard denied a direct link, but the perception remains that big donors could subtly steer universities away from research or teaching that doesn’t align with their interests.
  • Corporate Agenda in Academic Guise: Many partnerships blur the line between academia and industry. For instance, Facebook (Meta) has funded programs like the Berkeley AI Research (BAIR) Commons, and even set up a “Facebook AI Scientific Committee” to manage parts of it[89]. The Tech Transparency Project found that Meta employees hold advisory roles in university labs they fund (e.g. at University of Washington’s VR lab)[90]. Corporate donations often come with cooperative agreements – access to student talent, rights to license IP, etc. While universities maintain academic freedom on paper, the research is likely to “align with, if not directly contribute to, Meta’s corporate goals,” as an op-ed in the Harvard Crimson argued regarding the new CZI-funded AI institute[91]. The political end here is that university AI research could become an R&D arm of Big Tech, prioritizing incremental improvements that industry values (better recommender systems, VR applications) rather than more radical or critical approaches. This could also tilt research toward an ethos of techno-optimism and growth, underplaying critiques of technology’s impact. Some faculty and students have raised alarms about this growing corporate footprint, worrying about data privacy and the independence of academic inquiry[92][93]. In response, a few initiatives explicitly include civil society partners or oversight committees to keep research aligned with public interest (e.g., Stanford HAI involves policymakers and NGOs in its community[83]; Cyber Valley’s ethics board; MIT’s focus on ethical use). Still, the overall trend of public-private entanglement in AI research means political questions about accountability and public vs. private benefit are increasingly salient.
  • Commitment to Techno-Solutionism: By heavily funding AI approaches to societal challenges, there is an implicit commitment to technological solutions for social problems. This can carry political implications: it often aligns with a neoliberal perspective that innovation and markets (rather than government regulation or redistribution) will solve issues like education gaps or climate change. For example, NSF’s AI Institutes funding AI in education assumes AI tools will improve learning outcomes – a largely unproven but hopeful stance – rather than, say, focusing on hiring more teachers or reducing class sizes (more traditional policy solutions). Similarly, “AI for healthcare” initiatives hope algorithms can enhance diagnostics and efficiency, potentially sidestepping debates on healthcare access or costs. The risk is not that AI fails to contribute – it certainly can – but that society may over-rely on a particular vision where AI is the hero, possibly neglecting other measures. The political end is subtle: it’s a future where tech corporations and universities (rather than legislatures or grassroots movements) drive social progress, which could diminish democratic control if unchecked. On the flip side, some AI-for-good projects explicitly work with governments (e.g., JHU’s GovEx center uses data to improve city governance[94]), which could augment public sector capacity if done inclusively.
  • Global Equity and Inclusion: These massive investments are largely happening in the U.S. and Western Europe (and China, though that’s outside our scope here). This could widen global disparities in AI capability. Universities in the Global South are generally not receiving $100M gifts for AI. If the near-future AI breakthroughs and big models are concentrated in wealthy institutions, the direction of AI will reflect those societies’ priorities and biases. It commits the world to AI that might not fully account for developing countries’ needs (like solving problems specific to low-resource environments, or languages and cultures underrepresented in training data). Some initiatives acknowledge this and try to include diversity – e.g., Stanford HAI and others talk about inclusive global dialogue, and BLOOM was multilingual to include many languages. But overall, the geopolitical landscape of AI is being shaped by where these investments happen. The EU, for instance, explicitly wants to catch up to the U.S./China; smaller countries either bandwagon (Switzerland leveraging ETH/EPFL excellence) or risk falling behind. This could lead to a scenario where the rules of AI (technical standards, ethical norms) are set by a few players. The political question is whether those will serve universal human values or particular national interests. The heavy involvement of defense agencies (the U.S. NSF program is partly about national security; the UK’s taskforce focuses on frontier AI risks that could include misuse by adversaries) shows that state power calculations are entwined with academic AI research directions.
  • Ethical and Regulatory Influence: Interestingly, by investing in ethics and policy research alongside technical AI, universities could help shape future AI regulations and norms. Many faculty at these institutes advise governments or sit on international committees. For example, faculty from Oxford’s Ethics in AI or Stanford HAI have testified in policy hearings. If their institutes produce influential research on algorithmic bias or AI safety, it could inform laws (such as the EU’s AI Act or U.S. federal guidance). In that sense, these investments might commit us to a future where AI is more tightly governed (which is a political outcome) – assuming the ethicists and policy experts are empowered. The presence of an ethics institute funded by Schwarzman at Oxford, however, also raised eyebrows: does it serve to legitimize AI development and fend off external regulation by saying “we have it under control ethically”? There is a cynical view that some donors support ethics centers to self-regulate and preempt government intervention. If that’s the case, the commitment might be to a “soft governance” approach rather than hard laws – a social end that keeps the tech sector freer from strict oversight, relying on voluntary principles crafted in academia. Whether that sufficiently protects the public interest is debated.

In sum, these university AI initiatives are powerfully shaping the research agenda and sending strong signals about what the future of AI should look like. They generally envision AI that is pervasive, interdisciplinary, and (aspirationally) ethical and beneficial. But by choosing certain emphases, they do steer us along specific paths:

  • A path of rapid AI innovation (with universities as hubs of talent and discovery) which assumes the benefits outweigh the risks – and hence places urgency on advancing AI.
  • A path of deep academia-industry integration, meaning future AI will likely emerge from networks that include both campuses and corporations, blurring lines between public knowledge and private enterprise.
  • A path that often aligns with national strategies (U.S. maintaining leadership, Europe creating a “third way” of trusted AI, etc.), thereby linking AI’s progress to geopolitical goals.
  • And a path where ethical considerations are acknowledged, though the true test will be whether those considerations significantly redirect any AI research or just accompany it.

The investments do not irreversibly lock society into one political outcome, but they do create momentum. For example, once billions are spent on AI labs and supercomputers, there is an inertia to use them for ever more ambitious projects – possibly leading toward more surveillance technology or military use unless consciously guided otherwise. The hope from the human-centered camps is that by baking in ethics and diversity from the start, the momentum will be guided toward socially positive uses and not dystopian ones.

Lastly, it’s worth noting that these efforts also spur critical discourse. The fact we are analyzing them is a sign that society is pondering: Are we doing AI right? Some academics within and outside these institutes advocate for slowing down certain AI research (e.g., very risky AI) or focusing on AI that empowers the marginalized. Whether those voices gain influence could be a decisive factor in what social ends are ultimately served. The funding itself is politically neutral in theory, but how it’s directed – by which leaders and with what oversight – will determine if we end up with AI that reinforces existing power structures (Big Tech, surveillance state, etc.) or challenges and changes them for the better.

Sources:

  • MIT News: MIT’s $1 billion Schwarzman College of Computing and its mission of interdisciplinary, ethical AI[1][2].
  • Stanford University News: Launch of the Human-Centered AI Institute to “advance AI research, education, policy and practice to improve the human condition.”[85][9]
  • Harvard Magazine / News: $500M Chan-Zuckerberg gift to Harvard’s Kempner Institute for Natural and Artificial Intelligence[13][14].
  • USC press release / Forbes: USC’s $1B “Frontiers of Computing” initiative spanning new School of Advanced Computing and campus, with emphasis on ethics and computing for all disciplines[15][95].
  • Johns Hopkins Hub: JHU’s major investment in a data science and AI institute to fuel discovery in areas from medicine to humanities, hiring 80 faculty and focusing on data, AI, and their risks[21][25].
  • UIUC/IBM announcement: $200M IBM-Illinois Discovery Accelerator Institute for AI, quantum, cloud and its 10-year collaboration model[37][36].
  • C3.ai DTI (Illinois press & Microsoft news): $57M cash + $310M in-kind partnership of C3.ai, Microsoft, and leading universities to fund AI research for societal challenges (COVID-19 being first target)[38][96].
  • NSF announcements: Over $500M invested by 2023 in National AI Research Institutes at dozens of universities, each tackling themes like agriculture, education, climate, with the aim of securing U.S. leadership in AI[42][44].
  • UKRI news: £300M for a new AI Research Resource (AIRR) including Cambridge’s “Dawn” supercomputer and Bristol’s Isambard-AI, to boost UK’s AI capacity and support safe AI research[46][48].
  • Oxford University news/The Guardian: £150M Schwarzman donation to Oxford creating the Institute for Ethics in AI (2019), the largest gift to Oxford in modern times[53][97].
  • Cambridge release (AI Business): Cambridge’s center for responsible AI research launched to promote ethically sound AI technologies[55].
  • EE Times / French gov: France’s Phase 2 AI strategy with €2.2B investment by 2025, expansion of trusted-AI funding from €45M to €271M[58].
  • Global Venturing (2024): Cyber Valley launched with €165M public-private funding, linking universities with Amazon, Bosch, etc., with an independent ethics committee for oversight[62][64]. Highlights Cyber Valley’s alternative model emphasizing public accountability and startup creation[98][66].
  • EPFL News (2023): ETH Zurich and EPFL founding the Swiss National AI Institute, funded by government and aiming to build large foundation models aligned with Swiss values (open, transparent, trustworthy)[73][75]. Plans to open-source models and apply them in healthcare, science, etc.[78][79].
  • Tech Transparency Project (2023): Report on Meta and Chan Zuckerberg’s extensive funding of academic institutions and the potential influence – “hundreds of millions of dollars to more than 100 U.S. colleges”[87], including details on the Harvard controversy and concerns that research agendas may align with donor corporate goals[88][91].

[1] [2] [3] [4] [5] [6] [34] [82] MIT reshapes itself to shape the future | MIT News | Massachusetts Institute of Technology

https://news.mit.edu/2018/mit-reshapes-itself-stephen-schwarzman-college-of-computing-1015

[7] [8] [9] [10] [11] [83] [85] Stanford University launches the Institute for Human-Centered Artificial Intelligence | Stanford Report

https://news.stanford.edu/stories/2019/03/stanford_university_launches_human-centered_ai

[12] [PDF] Stanford Institute for Human-Centered Artificial Intelligence

[13] Chan Zuckerberg Commits $500 Million to Harvard Neuroscience …

https://www.harvardmagazine.com/2021/12/chan-zuckerberg-natural-and-artificial-intelligence

[14] [87] [88] [89] [90] [91] [92] [93] TTP – Zuckerberg and Meta Reach Deep into Academia

https://www.techtransparencyproject.org/articles/zuckerberg-and-meta-reach-deep-into-academia

[15] USC president launches $1B initiative for computing including AI …

[16] AI at USC – USC Participation in the US AI Safety Institute Consortium

[17] USC Launches $1 Billion AI Literacy Initiative – Bestcolleges.com

https://www.bestcolleges.com/news/usc-ai-literacy-initiative/

[18] USC launches first new school in 10 years – Annenberg Media

https://www.uscannenbergmedia.com/2024/02/05/usc-launches-first-new-school-in-10-years/

[19] [95] A new frontier: Why USC is investing $1 billion into advancing …

[20] USC receives $12 million for ethics institute focused on AI computing

https://philanthropynewsdigest.org/news/usc-receives-12-million-for-ethics-institute-focused-on-ai-computing

[21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [94] Johns Hopkins makes major investment in the power, promise of data science and artificial intelligence | Hub

https://hub.jhu.edu/2023/08/03/johns-hopkins-data-science-artificial-intelligence-institute/

[31] IBM and MIT to pursue joint research in artificial intelligence …

https://news.mit.edu/2017/ibm-mit-joint-research-watson-artificial-intelligence-lab-0907

[32] IBM has given $240 million for a new AI research lab at MIT – Axios

https://www.axios.com/2017/12/15/ibm-has-given-240-million-for-a-new-ai-research-lab-at-mit-1513305403

[33] IBM to invest $ 240 million to create AI lab in partnership with MIT

[35] [37] IBM and Illinois launch Discovery Accelerator Institute

https://impact.strategicplan.illinois.edu/ibm-and-illinois-launch-discovery-accelerator-institute/

[36] University of Illinois and IBM Researching AI, Quantum Tech

https://www.govtech.com/education/higher-ed/university-of-illinois-and-ibm-researching-ai-quantum-tech

[38] COVID-19 first target of new AI research consortium

https://vcresearch.berkeley.edu/news/covid-19-first-target-new-ai-research-consortium

[39] [96] $367 million C3.ai Digital Transformation Institute Launches

https://grainger.illinois.edu/news/magazine/c3ai

[40] C3.ai, Microsoft, and leading universities launch C3.ai Digital …

https://news.microsoft.com/source/2020/03/26/c3-ai-microsoft-and-leading-universities-launch-c3-ai-digital-transformation-institute

[41] New research institute puts $5 million toward AI to battle COVID-19 …

https://www.post-gazette.com/business/tech-news/2020/06/23/artificial-intelligence-covid19-machine-learning-algorithms-bias-virus-response-carnegie-mellon-university-microsoft/stories/202006220097

[42] [43] NSF AI Institutes continue creating groundswell of innovation

https://www.nsf.gov/science-matters/nsf-ai-institutes-continue-creating-groundswell

[44] National Artificial Intelligence (AI) Research Institutes

https://researchfunding.duke.edu/national-artificial-intelligence-ai-research-institutes-accelerating-research-transforming-society

[45] National AI Research Institutes – Artificial Intelligence – NSF

https://www.nsf.gov/focus-areas/ai/institutes

[46] [47] [48] [49] [50] [51] [52] [56] £300 million to launch first phase of new AI Research Resource – UKRI

https://www.ukri.org/news/300-million-to-launch-first-phase-of-new-ai-research-resource/

[53] Who we are… – Oxford Institute for Ethics in AI

http://www.oxford-aiethics.ox.ac.uk/about-the-institute

[54] [97] £150 million donation to Oxford by US billionaire Stephen …

[55] University of Cambridge launches responsible AI research center

https://aibusiness.com/responsible-ai/university-of-cambridge-launches-responsible-ai-research-center

[57] Foundations for trustworthy artificial intelligence | ETH Zurich

https://ethz.ch/en/news-and-events/eth-news/news/2020/09/foundations-for-trustworthy-artificial-intelligence.html

[58] Artificial Intelligence: €2 billion for the second phase of the national …

https://www.actuia.com/en/news/artificial-intelligence-e2-billion-for-the-second-phase-of-the-national-strategy/

[59] France as a European Leader in Artificial Intelligence

[60] Artificial and Natural Intelligence Toulouse Institute – ANITI

https://www.aerospace-valley.com/artificial-and-natural-intelligence-toulouse-institute-aniti

[61] France’s Macron announces $113 billion AI investment plan in …

https://www.mitrade.com/insights/news/live-news/article-3-630988-20250211

[62] [63] [64] [65] [66] [67] [70] [98] Europe’s Cyber Valley wants to make its alternative AI approach global –

[68] Amazon and Max Planck Society collaboration on research in …

https://www.aboutamazon.eu/news/press-lounge/amazon-and-max-planck-society-collaboration-on-research-in-artificial-intelligence

[69] Amazon will invest $1.5M in Germany’s Cyber Valley AI research hub

https://siliconangle.com/2017/10/23/amazon-will-invest-1-5m-germanys-cyber-valley-ai-research-hub/

[71] [72] [73] [74] [75] [76] [77] [78] [79] [80] [86] EPFL and ETH Zurich enhance collaboration to boost AI in Switzerland – EPFL

https://actu.epfl.ch/news/epfl-and-eth-zurich-enhance-collaboration-to-boost/

[81] 2022 Readers’ & Editors’ Choice Awards – Best HPC Collaboration

[84] France to Invest €2.2B in AI by 2025 – EE Times Europe
