I have mentioned before in several posts how remote-control war exacerbates the creation of new enemies, precisely because imprecision increases collateral damage of a sort other than bombs with too much boom (hitting beyond the nominal target). This piece from Ars Technica UK reiterates that point by describing how one program in particular had problems, even though its whole reason for existing was to reduce sifting through mountains of data to a simple machine learning problem. Maybe the problem here is not that machines can't learn, but that we refuse to.
Guilt by assumed association is always going to be inherently inaccurate when you don't have actual human assets on the ground, and in the loop of those associations, to make the final call on whether it's full-fledged collaboration or just calling because you know the person. And make no mistake: our list of enemies is growing rapidly enough via obvious mistakes in observing what is going on around us, let alone through what is inherently opaque.
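To put rough numbers on that inherent inaccuracy, here is a minimal sketch of the base-rate problem. Every figure in it is a made-up assumption chosen only for illustration, not a claim about the actual program: even an implausibly accurate classifier, run over an entire population of phone records, will flag far more innocent people than actual collaborators.

```python
# Hypothetical base-rate arithmetic: why "guilt by association" classifiers
# flood the list with innocents. All numbers below are illustrative assumptions.

population = 50_000_000       # phone users scanned (assumed)
prevalence = 0.0001           # 1 in 10,000 is an actual collaborator (assumed)
sensitivity = 0.99            # classifier catches 99% of real targets (assumed)
false_positive_rate = 0.005   # misflags 0.5% of innocents (assumed, and generous)

actual = population * prevalence
innocent = population - actual

flagged_guilty = actual * sensitivity
flagged_innocent = innocent * false_positive_rate

print(f"Real collaborators flagged: {flagged_guilty:,.0f}")
print(f"Innocents flagged:          {flagged_innocent:,.0f}")
print(f"Odds a flagged person is actually guilty: "
      f"{flagged_guilty / (flagged_guilty + flagged_innocent):.1%}")
```

Under these charitable made-up assumptions, roughly 98 of every 100 people the machine flags are false positives. That is exactly the kind of mistake that manufactures new enemies.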