Events

Upcoming AI Events at LMU

  • Online Module • 1 to 9 October 2022 The Public Good Statistics: A Reflective Introduction by Walter J. Radermacher

    This self-paced learning module is part of the series “Statistics for the Public Good – Infrastructure Decision Making, Research and Discourse”, which introduces participants to public statistics as a process in which the design, production and communication of information (statistics) form an integral whole. As with other products (architecture, furniture, food, cars, smartphones, etc.), the aim is to optimize the design (form) in relation to the use (function) of the product (“form follows function”). Learn about the DNA of official statistics, what quality means, and how to achieve it, especially under the challenges of modern societies. In times of data revolution, artificial intelligence and a ubiquitous flood of permanently produced and consumed information, the ability to distinguish good from less good information, to fact-check, and to verify statistical evidence is crucial. The public infrastructure called “official statistics” plays an essential role in this respect. You will gain access to this online module when you participate in the in-person workshops of the series.

  • Lecture • 5 October 2022 Munich AI Lectures: Max Welling

    There is an interesting new field developing at the intersection of the physical sciences and deep learning, sometimes called AI4Science. In one direction, tools developed in the AI community are used to solve problems in science, such as protein folding, molecular simulation, and so on. But in the other direction as well, deep insights from mathematics and physics are inspiring new DL architectures, such as neural ODE solvers and equivariant networks. In this talk, I will start by mapping out some of the opportunities at this intersection and subsequently dive a little deeper into PDE solving. In this subfield, too, cross-fertilization has already happened both ways: people have used DL tools to solve PDEs much faster than traditional solvers can. Conversely, there are also efforts to use PDEs as “infinite-width”, functional representations of layers in a deep NN architecture. The latter is helpful, for instance, for becoming independent of gridding choices. In the second half of this talk, I will explain our most recent efforts to solve PDEs faster and more accurately using DL and, conversely, new ways to use PDEs as an approximate equivariance prior.

  • Workshop • 10 October 2022 The Public Good Statistics: Let’s talk about Data Culture! by Walter J. Radermacher

    This workshop is part of the series “Statistics for the Public Good – Infrastructure Decision Making, Research and Discourse”. You will get to know the conditions under which evidence-based policy can contribute to shaping the transformation processes that arise in times of crisis and to minimizing social conflicts. In addition to its function as a common language for public (national and international) discourse, public statistics also serves as a data source and partner for research on individual and social behavior. Convention Theory (“Économie des Conventions”) will be used as a conceptual guide for the exercises and explorations of statistical terrain.

  • Workshop • 11 October 2022 Should Government Data Concern or Serve Us? by Julia Lane

    This workshop is part of the series “Statistics for the Public Good – Infrastructure Decision Making, Research and Discourse”.
    There has been a marked surge in the use of data and evidence in new ways to inform policy in the United States. Although many blue-ribbon committees are established whose recommendations are ignored, the U.S. Commission on Evidence-Based Policymaking (Evidence Commission) has been one notable exception. Established in 2016, the Commission saw 11 of its 22 recommendations enacted into law in 2018, including establishing or reinforcing the leadership positions, planning processes, data sharing authorities, and privacy protections necessary to modernize the national evidence-building infrastructure. An advisory committee established in the law is currently deliberating how to implement the rest, given the twin goals of increasing the value of data for evidence building through access while also ensuring the continued trust of data providers – trust that access to data will generate evidence that improves policies, and trust that privacy will be respected and confidentiality will be protected. Yet the interest and need are so great that exciting activities are already underway.
    Historically, states in the U.S. have been remarkably effective in their use of data. As far back as 1932, U.S. Supreme Court Justice Louis Brandeis argued that states could be “laboratories” of experimentation; that is, states could test the effects of different policies, determine what worked and what didn’t, and lead the way to national programs. States have proven Justice Brandeis right time and again – from Massachusetts’s experiment with health care reform to California’s pollution controls. Now states are innovating and experimenting with ideas about how best to use data to produce evidence and inform policy. A recent conference, “Multi-State Data Collaboratives: From Projects to Products to Practice”, provided a glimpse into a future of new types of collaborations, new types of measurement and new ways of protecting privacy. The impact of many state programs – training, human services, criminal justice, and education – is often measured by the labor market outcomes of the individuals they serve, yet each state’s data ends at state lines. That situation has posed problems for states that know their residents often cross state lines to go to school, to work, and, unfortunately, to become incarcerated, particularly when population centers are near state borders.
    However, a secure data sharing platform, established with federal dollars as a possible blueprint to inform the Evidence Commission at the start of its deliberations, has proven wildly successful in giving states the opportunity to share highly sensitive data across state lines. With additional philanthropic and state funding, the platform provided the core infrastructure needed to establish a Midwest state collaborative in 2018 and a series of cross-state data collaborations. It is a blueprint, based on the Five Safes framework, that can serve as a roadmap to additional collaborative activities to propel evidence-building forward at an accelerated pace – and to show how such collaborations can lead to new, critically needed measurements in response to the massive changes in the economy and society.
    This workshop will provide a discussion of the Five Safes framework as a way of conceptualizing and implementing the joint determination of risk and utility. It will describe the Coleridge Initiative’s use of the U.S. FedRAMP framework, and the FedRAMP approach in more detail, in terms of minimizing risk. It will then work through the role of training classes in creating value.
    The workshop will feature hands-on examples of how the training class worked, with active discussion of what might or might not apply in the German context.

  • Workshop • 12 October 2022 Values, Ethics and What They Mean for Quality by Walter J. Radermacher

    This workshop is part of the series “Statistics for the Public Good – Infrastructure Decision Making, Research and Discourse”.
    New data sources and data science methods open up substantial opportunities for research and for improving statistics. However, integrating traditional and newer methods requires more than merging methodology and technology. Rather, it is also a matter of further developing the other dimensions of good information quality – infrastructure, language and values – simultaneously and in an integrated manner.
    Various use cases relevant to current policy at the international and national levels will be explored, with course participants acting as statistical stakeholders in distributed roles.

  • Workshop • 13 October 2022 Data 4 Policy: Is the Statistical Era Being Replaced by an Era of Data? by Walter J. Radermacher

    This workshop is part of the series “Statistics for the Public Good – Infrastructure Decision Making, Research and Discourse”.
    In this respect, the workshop is about more than just the application of statistical methods. Rather, the focus must be on the questions that a society wants answered with solid statistics for its current, pressing and conflict-laden issues. Other aspects then come into play, namely whether policymakers value and finance this infrastructure, whether sufficient data literacy exists in the population at large, and so on.
    Various use cases relevant to current policy at the international and national levels will be explored, with course participants acting as statistical stakeholders in distributed roles.

  • Panel Discussion • 25 October 2022 Next Generation AI: The Future of Astrophysics

    Panel Discussion with Daniel Gruen (LMU), Lukas Heinrich (TUM) and Kevin Heng (LMU)
    Moderation: Jenny Sorce (IAS, Paris-Saclay/CAS Fellow)

  • Lecture • 2 November 2022 Munich AI Lectures: Michael Bronstein

    Topic TBD

  • Panel Discussion • 23 November 2022 Next Generation AI: Artificial Intelligence in Conflict Research

    Panel Discussion with Nils B. Weidmann (Konstanz/CAS Fellow)
    Moderation: Gitta Kutyniok (LMU)

  • Panel Discussion • 1 December 2022 Next Generation AI: Writing with Artificial Intelligence

    Panel Discussion with Barbara Plank (LMU), Mario Haim (LMU), Uli Köppen (BR) and Hinrich Schütze (LMU)
    Moderation: Lena Bouman (CAS)

  • Panel Discussion • 2 February 2023 Next Generation AI: The Impact of AlphaFold on Protein Research

    Panel Discussion with Karl-Peter Hopfner (LMU) and Alexander Pritzel (DeepMind)
    Moderation: Julia Merlot (SPIEGEL)

  • Panel Discussion • 6 February 2023 Next Generation AI: Visualizing (Bio-)Medicine with Artificial Intelligence

    Panel Discussion with Laura Busse (LMU), Frederick Klauschen (LMU) and Björn Menze (Zürich)
    Moderation: Björn Ommer and Michael Ingrisch (LMU)
