Claudio Calvino, Ph.D.
London Area, United Kingdom
3K followers
500+ connections
Other similar profiles
- Daniel Hamilton (Greater London)
- Miguel Fierro (United States)
- Dr Chris Brauer (London)
- Felix Sanchez Garcia (Greater Cambridge Area)
- Ian Jones (Rugby)
  Strategist & Implementer I Business Transformation Expert I Board & iNED Experience I Portfolio, Programme & Project Management Professional I Accredited GCologist at The GC Index®
- Ian West (United Kingdom)
- Yomi Tejumola (United Kingdom)
- Iman Karimi, Dr.-Ing. (United Kingdom)
- Habib Amir (London)
- Tobias Preis (London)
- Christopher Haley (London Area, United Kingdom)
- Karthik Balisagar (London Area, United Kingdom)
- Olly Benzecry (United Kingdom)
  Chairman, non-Exec director and advisor
- Aditi Banerjee (United Kingdom)
- Elizabeth Adams (London)
- Gyanee Dewnarain (London)
- Blaise Grimes-Viort (London)
- John Pinkard (United Kingdom)
  Providing Sustainable Mobility Solutions | Transforming Public, Private, & Third-Sector Transport Initiatives | Strategic & Creative Thinker | Advocate for Sustainable Travel & Climate Solutions | MD at Ansons Consulting
- Benoit Reillier (United Kingdom)
  Managing Director at Launchworks & Co, Chair of Platform Leaders. Author ‘Platform Strategy’ and ‘Mission BlaBlaCar’. I help design, launch, scale and manage platform businesses.
Explore more posts
Antonio Weiss
🚀 Exciting Findings from DARE UK's Latest Workshop! 🚀

We are thrilled to share the key insights from the DARE UK "Scientific Use Cases Workshop Report", which The PSC were delighted to support. This comprehensive workshop brought together 48 researchers and 7 public participants, surfacing 52 use cases for cross-domain sensitive data research. Here are the highlights:
- Diverse Data Sources: 69% of use cases linked data across three or more domains, with 80 distinct data types identified.
- High Impact Areas: Over half of the use cases addressed long-standing societal challenges like social inequality (27%) and climate change (15%).
- Innovative Approaches: 38% of use cases combined data at the family or household level, emphasizing the importance of social environments on health outcomes.

🌟 Top Use Cases:
- Transforming the Food Economy: Estimated benefits of £118.4bn by 2050.
- Addressing Domestic Abuse: Potential economic benefits of £17.66bn.
- Reducing NHS Bottlenecks: A 5-10% capacity increase could yield £20.92bn in benefits over the next 5 years.
- Improving Vaccine Uptake: Enhancing uptake by 5-10% could add £1.4bn in benefits.

The workshop's economic benefit modeling suggests a total potential benefit of £319.11bn (+/- £79.14bn) by 2050 across these prioritized use cases. This report underscores the transformative potential of connecting diverse data types to tackle some of the UK's most pressing challenges. Read the full report for detailed insights and the exciting future of data research in the UK.

#DAREUK #DataResearch #Innovation #PublicGood #ScientificUseCases
Francesca Rossi
Make sure you provide your opinion on risk thresholds for advanced AI systems. We need everybody to weigh in. The deadline is September 10th.

To me, compute thresholds are not a useful metric for identifying safety risks. AI safety is a context-dependent evaluation of multiple factors, not a distinct property of a model. Passing a compute threshold does not necessarily indicate the presence of dangerous capabilities; in fact, it may well be that more compute will help in achieving higher levels of safety. And recently, greater levels of performance are being achieved with smaller and smaller models.

Rather than setting thresholds (based on compute or anything else), the evaluation of the capabilities and, even more importantly, the limitations of AI systems is a much better indicator of issues that can turn into real risks when a model is used. Work on evaluation is ongoing in both academia and corporate environments, and there is still no single agreed way to do it. Moreover, how dangerous a given risk is depends heavily on the deployment and use scenario: even models with powerful capabilities or serious limitations can be safe to use in certain scenarios but dangerous in others.

What is your opinion? Whatever it is, you should upload it to the OECD.AI site for the public consultation (https://lnkd.in/dKDP_4za). I already uploaded mine!

#ai #airisks #advancedai #computethresholds
Gemma Tetlow
NEW Institute for Government report, with Grant Thornton UK, looks at capital spending in public services and how it can be done better. TL;DR: To improve public services, the next government will need to invest more in equipment and buildings than in the past. But it will need to spend better too.

Why look at this? There is increasing evidence of poor-quality capital in public services, and that this is affecting the day-to-day running of services. Fixing this will need to be a priority for the next government.

To read the full report: https://lnkd.in/eQ-WAJrh
Matt Davies
There's a lot happening in UK AI policy at the moment:

🏭 The new Government has published a green paper and consultation on its industrial strategy, which identifies ‘the rapid development of AI’ as a key opportunity for the UK.
💼 The Chancellor, Rachel Reeves, has just delivered the first Labour budget in 15 years, which reaffirmed the Government’s commitment to AI-related initiatives such as the proposed ‘National Data Library’ and announced a review of barriers to the adoption of 'transformative technologies' including AI.
🚀 In the coming weeks, we’re expecting the publication of an ‘AI Opportunities Action Plan’ and a new consultation on AI regulation.

Taken together, do all these developments add up to a coherent industrial approach to AI? If so, what’s new and distinctive about this Government’s approach compared to its predecessor – and what’s next?

On Monday 11th I'll be discussing this and more with three fantastic panelists:
- Amba Kak, AI Now Institute
- Haydn Belfield, Centre for the Study of Existential Risk, University of Cambridge / Leverhulme Centre for the Future of Intelligence
- Mary Towers, Trades Union Congress (The TUC)

Register now to join the conversation next week 👇
https://lnkd.in/efyqNvcS
James Smith
Cognitive dissonance, based on fear, can override critical thought, stoke societal division and make us lose sight of the truth. Locked in the struggle between moderation and free speech, social media platforms take a lot of the blame, but can only do so much. At some point it is up to us to call out bias, spin and journalistic subjectivity. PGI's latest blog discusses the crucial role attribution plays in identifying disinformation.

#trustandsafety #integrity #onlineharms #onlinesafety #disinformation #digitalinvestigations
Reema Patel
Your regular reminder of the importance of inclusive data stewardship. Examples such as the below reveal how far we have to go until data governance truly works for everyone. And of course the implications are significant, given the acceleration of AI technologies driven by data. Initiatives like Biobank rely heavily on social contracts and data donation, and careful thought and consideration about how data sets might be used. I hope this sparks a more thoughtful conversation about rights, safeguards, access and control, and also about who best to engage in these conversations. https://lnkd.in/ecwczYSy
Max Ghenis
We've just launched two new reports showcasing PolicyEngine's unique behavioral response capabilities.

1. In the UK, only PolicyEngine provides public access to capital gains microsimulation analysis--and we model capital gains responses, to boot. With a -0.5 elasticity, behaviour halves our revenue projection from the capital gains tax increase that the Labour Party (UK) is expected to announce in the Autumn Budget next week. https://lnkd.in/esQHEtkD

2. In the US, we model behavioral effects of federal tax reforms--and how they intersect with benefits and state taxes. In this case, we project that Kamala Harris's proposal to expand the Earned Income Tax Credit for filers without qualifying dependents will (applying Congressional Budget Office elasticities):
a) Cost state and local governments about $2 billion per year, due both to their EITC matches and the feedback effects from earnings changes; and
b) Cost the federal government about $2 billion per year in additional spending on benefits like SNAP, because the phase-out lowers earnings.
https://lnkd.in/eWzupGwC

We're grateful to Arnold Ventures for supporting our work to add behavioral responses to PolicyEngine, enabling anyone to quickly test different policies and elasticities with our comprehensive open-source model and web app. We'll be sharing our full methodology soon; in the meantime, we're always glad to answer questions.
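The elasticity arithmetic in the UK example can be sketched as a back-of-the-envelope calculation. This is not PolicyEngine's actual model or API: the function name, the proportional-response formula, and the example rates below are all illustrative assumptions, shown only to make the mechanism concrete.

```python
def behavioral_revenue(static_revenue, rate_old, rate_new, elasticity):
    """Scale a static revenue projection by a simple behavioral response.

    Illustrative assumption: the taxed base shrinks in proportion to
    elasticity * (percent change in the tax rate), so part of the
    mechanical revenue gain from a rate rise is offset by behaviour.
    Rates are in percent.
    """
    pct_rate_change = (rate_new - rate_old) / rate_old
    behavioral_factor = 1 + elasticity * pct_rate_change
    return static_revenue * behavioral_factor


# With a -0.5 elasticity, a rate rise large enough to double the rate
# (purely illustrative, e.g. 10% -> 20%) halves the static projection:
print(behavioral_revenue(10.0, 10, 20, -0.5))  # 5.0
```

A real microsimulation applies responses household by household across the distribution of gains; this aggregate linear version only shows why a -0.5 elasticity can roughly halve a projected revenue gain when the effective rate roughly doubles.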
Sam Stockwell
One of the biggest observations I identified across this “super year of elections” involved the increasing challenges citizens now face in discerning the difference between real and synthetic content, as well as in finding credible, verified information sources online. It’s therefore brilliant to see that Elizabeth Seger and Demos are leading an important new project on how we can restore and enrich the quality of our information ecosystem.
Sandra Hamilton
#2024SocialValueConference

Transformation requires change. From Market Purchasing to System Stewardship. From Procurement to Partnership. From Competition to Collaboration. Additional Social Value vs Inherent Social Value.

We think it's time for the UK public sector to separate the competitive processes used for market purchasing from the system stewardship and collaborative commissioning needed to transform the delivery of public services, especially complex human services. And we're not talking small numbers:
- £123.4 billion: local authorities’ total net current service expenditure in 2023-24, including adult social care, children’s social care and housing services.
- £1.5 billion: increase in adult social care spending by English local authorities in 2023-24.
- 75% of local government expenditure is now spent on social care.

Looking forward to our VCSE panel discussion at 2pm tomorrow, Wednesday, October 16th, 2024 at the National Social Value Conference. Please join Mark Simms OBE, P3 Charity; @Laura McGann, Family Action; Julian Blake, Stone King; Ben Carpenter, CEO Social Value International and myself for this important discussion, with concluding remarks provided by Claire Dove CBE DL.

VCSE Panel Recommendations: our open letter to the LGA is available here: https://lnkd.in/eV_tuxRj
If you would like to add your support to the letter, you can do so at the Google link below:
https://lnkd.in/edb5A6KV

#SocialValue #Procurement #Commissioning #SocialServices #PublicSectorTransformation
Kevin O'Sullivan
Really great insights here for data obsessives like myself. Also, refreshing for a company to take a step back and not be afraid to criticise the status quo. Well worth your time if you believe investment in data can deliver not only better services - in this case transport - but can really empower people to make better and more informed decisions in their everyday lives. I'd recommend reading the article and the white paper. 👇
EU AI Act
🚀 Exciting News in AI Policy! 🚀

OECD.AI has just launched a public consultation on risk thresholds for advanced AI systems. I'm thrilled that we're contributing to this significant effort!

Background: The Seoul Ministerial Statement ([link](https://lnkd.in/erQH2Fz9)) and the Frontier AI Safety Commitments ([link](https://lnkd.in/eKCJJGcp)) both highlight the critical importance of establishing risk thresholds for advanced AI systems. Despite this, determining adequate thresholds and the methods for setting them remains an ongoing challenge. This consultation represents a crucial step towards creating robust guidelines that ensure the safe and ethical deployment of advanced AI technologies. By participating in this initiative, we can help shape the future of AI governance and contribute to a safer digital world. I encourage everyone involved in AI development and policy to participate and share their insights.

Public consultation: If you want to participate, please visit the OECD website (https://lnkd.in/esrxtMyJ) and respond to the following questions:
(1) What publications and/or other resources have you found useful on the topic of AI risk thresholds?
(2) To what extent do you believe AI risk thresholds based on compute power are appropriate to mitigate risks from advanced AI systems?
(3) To what extent do you believe that other types of AI risk thresholds (i.e., thresholds not explicitly tied to compute) would be valuable, and what are they?
(4) What strategies and approaches can governments or companies use to identify and set out specific thresholds and measure real-world systems against those thresholds?
(5) What requirements should be imposed for systems that exceed any given threshold?
(6) What else should the OECD and collaborating organisations keep in mind with regards to designing and/or implementing AI risk thresholds?
#AI #OECD #AIPolicy #RiskManagement #AIConsultation #Innovation #EthicsInAI #FutureOfTech
Daniel Spichtinger
In April I had the opportunity to publish an analysis of trends in #dataprivacy legislation in the global North for the LSE Impact Blog. Across different jurisdictions within and outside Europe, we see a shift towards more stringent, #GDPR-aligned regulatory environments. These changes pose both challenges and opportunities for researchers and policymakers, especially in cross-jurisdictional data sharing: https://lnkd.in/dVrEUBnr

#DataProtection #Research #GDPR #DataPrivacy #CrossBorderDataHandling #EU #UK #US #Switzerland
Alexander Iosad
AI regulation debates are rife with kneejerk reactions. Ban it all! Let it run free and wild! Existential risk! Immediate harms! So it’s an absolute delight to read this thoughtful and balanced position paper from my Tony Blair Institute for Global Change colleague Jakob Mökander and co-authors Helen Margetts, Keegan McBride, Nitarshan R. and Robert Trager. They convincingly argue that we need flexibility, international alignment, compliance incentives and a clear distinction between regulation and the role of the AI Safety Institute. I’m particularly pleased to see recommendations on developing AI-specific regulatory capacity for different sectors – the impact of AI on education, and its risks, differ from its role in healthcare or in the delivery of benefits. Use-case-specific risks need deep expertise in the use cases. Helping relevant regulators better understand AI, and ensuring relevant departments have AI expertise at the senior levels (for example, as we’ve recommended before, by appointing DG-level Chief AI Officers), would go a long way to developing approaches that are sensitive to the needs of each sector.
OECD.AI
Public consultation on risk thresholds for advanced AI systems

📅 DEADLINE EXTENSION: 1 OCTOBER
https://lnkd.in/e98Pzw-b

The OECD is collaborating with diverse stakeholders to explore potential approaches, opportunities, and limitations for establishing risk thresholds for advanced AI systems. To inform this work, we are holding an open public consultation to obtain the views of all interested parties. We are interested in hearing your thoughts on the following key questions:

❓ What publications or other resources have you found helpful on AI risk thresholds?
❓ To what extent do you believe AI risk thresholds based on compute power are adequate and appropriate to mitigate risks from advanced AI systems?
❓ To what extent do you believe other AI risk thresholds would be valuable, and what are they?
❓ What strategies and approaches can governments or companies use to identify and set specific thresholds and measure real-world systems against those thresholds? What requirements should be imposed for systems that exceed any given threshold?
❓ What else should the OECD and collaborating organisations consider concerning designing and/or implementing AI risk thresholds?

Francesca Rossi Stuart Russell Michael Schönstein Ulrik Vestergaard Knudsen Jerry Sheehan Audrey Plonk Celine Caira Luis Aranda Jamie Berryhill Lucia Russo Noah Oder John Leo Tarver ⒿⓁⓉ Rashad Abelson Angélina Gentaz Valéria Silva Bénédicte Rispal Johannes Leon Kirnberger Eunseo Dana Choi Pablo Gomez Ayerbe Sara Fialho Esposito Nikolas S. Sarah Bérubé Guillermo H.

#airisk #aisafety #trustworthyai #oecd #risk
Francis Ruiz
https://lnkd.in/eQVxfnms

The UK Treasury (the Finance Ministry) has just released a document setting out “Areas of Research Interest”. (H/T Gemma Tetlow, Institute for Government.) I thought it useful to pick out the following, which may be relevant to some of my connections on this site:

1. Growth
1.17 What are the growth effects of spending on health and education?

2. Labour market
2.1 What is driving the rise in health-related labour market inactivity? And what policies could be used to tackle this?

6. Tax
6.9 Estimates of the impact of public health taxes, such as the soft drinks industry levy, on social outcomes including health outcomes, in the UK and internationally?

7. Public Spending and Public Services
7.4 How can the government better understand the interactions between different areas of spending, for example between early years, health and welfare?
7.7 How can we increase productivity within public services and improve outcomes for service users, including through a stronger focus on prevention?
7.9 How do we identify the most effective areas of preventative spending? What do other countries do?
7.17 How can organisations, with a particular focus on public bodies, most effectively learn lessons from (i) crisis response; and (ii) systemic policy problems; and address them?
7.20 How effective is spending through financial benefits compared to service provision in supporting different groups of people?

And perhaps it wouldn’t surprise anyone to know that the Treasury are interested in “global trends in tariff policy”….
Stephen Abbott Pugh
After years of experimentation, Friday 29 November 2024 will see the closure of the Open Ownership Register. I’ve written a brief history of efforts to make it easier to search, explore and visualise high-quality #beneficialownership data https://lnkd.in/eQqUappZ First starting life as WhoControlsIt back in 2014 and created by the OpenCorporates team, the Open Ownership Register went on to help people use beneficial ownership data from Armenia, Denmark, Slovakia, Ukraine, and the United Kingdom https://lnkd.in/e6mhqg_N It grew to encompass data explaining more than 33 million beneficial ownership relationships between over 11 million companies and 10 million beneficial owners. Over the years, Open Ownership has sought to show how to operationalise and iterate on the usage of standardised and interoperable beneficial ownership data. We hope that this publication and our open source code will provide valuable lessons to others. #openstandards #opendata
Jakob Mökander
The UK government is considering binding regulations for frontier AI. What should the “AI bill” look like? And why does it matter?

In our new report, “Getting the UK’s Legislative Strategy for AI Right”, the Tony Blair Institute for Global Change outlines policy recommendations for how the government can strengthen the UK's innovation-friendly, sector-specific approach as it ponders an AI bill focused narrowly on frontier-AI safety. The paper is co-authored with Helen Margetts, Robert Trager, Keegan McBride and Nitarshan R. Read the full paper here: https://bit.ly/4dWCYnA

Key points include:
🏛 Existing regulators require additional resources, capabilities and powers to ensure good AI governance within their respective sectors
🔬 AISI should not become a regulator, but an independent technical body focused on advancing scientific understanding and developing novel tools for model evaluation and AI risk mitigation
🔗 Some form of common regulatory capacity will be needed to coordinate existing regulators’ work and improve transparency and accountability across the AI value chain
🌐 International collaboration on AI safety research and standards will be key to ensuring both a business-friendly environment and the responsible design and use of AI
📈 Any AI bill should go hand-in-hand with incentives for innovation and investment in enabling digital infrastructure, including shared data and compute infrastructure

The paper has been developed through broad stakeholder engagement. A special thanks to all who contributed valuable input, including: Markus Anderljung, Jack Clark, Gina Neff, Ph.D., Alexander (Sacha) Babuta, Ben Robinson, Owen Larter, Mihir Kshirsagar, Max Fenkell and Rebecca Stimson.

AI regulation is difficult to get right. There is uncertainty around future AI systems' capabilities, and economic, technical, social as well as geopolitical considerations must be accounted for. No one sits with all the answers. Our joint position paper is a starting point, inviting broad conversations around how to regulate frontier AI and how to get the UK’s legislative strategy for AI right.

How do you think the UK government should relate to global developments in AI regulation, including the EU AI Act Codes of Practice and evolving state legislation in the US?

Oxford Internet Institute, University of Oxford, The Alan Turing Institute, Oxford Martin AI Governance Initiative, University of Cambridge. Marie Teo, Sam Sharps, Benedict Macon-Cooney, Ryan Wain, Madison Iannone, Alexander Iosad, Melanie G., Bridget Boakye, Tom Westgarth, Kevin Luca Zandermann, Guy Ward Jackson, Rasmus Fonnesbæk Andersen, PhD, Calum Handforth, Johan Harvard
Richard Welpton
Writing a really good application to access sensitive #data in a #trustedresearchenvironment (#TRE) depends on a good source of #metadata (amongst other factors). Emma Devine at Research Data Scotland has just published insights from a recent survey that highlight exactly why good metadata is important, and what researchers expect.

📖 Read the (2 min) blog here: 👇
https://lnkd.in/eGkTd4Zr

**Metadata: what could be improved?**
The survey highlighted the need for:
➡️ consistent cataloguing
➡️ consistent or standardised approaches to describe data and produce documentation and guides
➡️ ensuring that metadata is accurate at all times, and updated (including details of new data linkages as they emerge)

The results from this survey will support Research Data Scotland's efforts to simplify access to sources of data generated in Scotland, and will contribute to the Connect 4 project (funded by ESRC: Economic and Social Research Council and UK Research and Innovation as a Future Data Services pilot to achieve greater data service federation).

**Future Data Services (FDS) review**
The Data Discovery and Curation theme of the ESRC's FDS review has reached similar findings. We've also spoken to those tasked with documenting and curating data to think about how they can be supported to do this in a way that meets the needs of the #research community in the future. Almost all of the ESRC's data infrastructure investments are involved in data curation and metadata management. Examples include UK Data Service, Understanding Society, UCL Centre for Longitudinal Studies, and CLOSER | The home of longitudinal research.