Alexey Mitin’s Post

Machine Learning Engineer

The narrative shaping AI technology similarly to nuclear technology (all these references to a Manhattan Project-like effort and so on) implicitly creates the impression of something that could be contained and handled by a group of bright minds, who would successfully use it for the good of humanity while avoiding any harm from it. However, IMO, just one illustration of how easily AI can produce complexity beyond the reach of even top-notch specialists breaks this utopian picture:

'With the 37th move in the match's second game, AlphaGo landed a surprise on the right-hand side of the 19-by-19 board that flummoxed even the world's best Go players, including Lee Sedol. "That's a very strange move," said one commentator, himself a nine dan Go player, the highest rank there is. "I thought it was a mistake," said the other.'

AI models have such a fat-tailed distribution of possible outcomes that the safest way to apply this technology looks to be the open-source one, with tens of millions of engineers working with tens of thousands of different models tuned from tens of different foundation models. Not tens of thousands of engineers in selected laboratories, but tens of millions of engineers worldwide, working deep inside the models on a daily basis - IMO, that's the safer way to handle a technology with such a fat-tailed distribution. Your thoughts?
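P.S. on the 'fat-tailed' point: a minimal numeric sketch, assuming a standard normal as the thin-tailed stand-in and a Student-t with 2 degrees of freedom as the fat-tailed one. Both distributions and the threshold of 6 are illustrative assumptions of mine, not anything from the sources quoted above; the sketch only shows how events that are essentially impossible under thin tails remain quite plausible under fat ones.

```python
# Illustrative sketch only: the distributions (normal vs Student-t with
# df=2) and the threshold of 6 are assumptions chosen to make the
# "fat-tailed" point concrete; nothing here comes from the post itself.
from scipy.stats import norm, t

threshold = 6.0  # how far out in the tail we look

p_thin = norm.sf(threshold)     # P(X > 6) for a thin-tailed normal
p_fat = t.sf(threshold, df=2)   # P(X > 6) for a fat-tailed t(2)

print(f"thin-tailed P(X > {threshold}): {p_thin:.1e}")  # ~ 1e-9
print(f"fat-tailed  P(X > {threshold}): {p_fat:.1e}")   # ~ 1.3e-2
print(f"ratio: {p_fat / p_thin:.1e}")                   # ~ 1e+7
```

Under the normal, a 6-unit excursion has odds of roughly one in a billion; under the t(2) it happens more than 1% of the time, about seven orders of magnitude more often. That is the kind of gap the argument above turns on.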
