🌱 Seedling noteworthy

A really great run-through of some top level privacy concerns around AI

posted in: Privacy, privacy and ai.
~175 words, about a 1 min read.

This article is a really great walkthrough of the major privacy issues that anyone building with AI should be concerned with. It's especially useful if you're doing your own model training.

The heading-level points are:

  • AI models memorize their training data
  • AI models then leak their training data
  • Ad hoc protections don't work
  • Robust protections exist, though their mileage may vary
  • The larger the model, the worse it gets
  • Bonus thing: AI companies are overwhelmingly dishonest

There are a lot of things that privacy experts know and agree on about AI… but they might not be common knowledge outside our bubble.

That seems the kind of thing I should write a blog post about!
[...]
I think most privacy experts would agree with this post so far. Opinions diverge once you start asking "do the benefits of AI outweigh the risks?" If you ask me, the benefits are extremely over-hyped, while the harms (including, but not limited to, privacy risks) are very tangible and costly.

— Via Damien Desfontaines, Five things privacy experts know about AI - Ted is writing things
Page History

This page was first added to the repository on January 14, 2025 in commit dc094d16. View the source on GitHub.