My Grandad's advice for safe driving was: "Always remember you're controlling a potentially lethal weapon". I think it helped me to take driving even more seriously.
Perhaps it's good advice for us technologists too...
"Always remember you're programming a potential platform for bullying and harassment".
"Always remember you're designing a potential system for mass surveillance".
"Always remember you're deploying a potential tool for mass social unrest".
@robertnyman @peter doesn't make it wrong though. 😃
If user data wasn't dangerous it wouldn't be worth so much.
@robertnyman @ada I still haven't watched Black Mirror yet! We don't have Netflix - but planning to some day. I've heard from @torgo that it should be mandatory watching for everyone in the tech industry! I don't think there was one catalyst in particular for the toot, more just something I've been dwelling on more lately and reading books and articles on it. A good example: http://idlewords.com/talks/ancient_web.htm
@peter you marxist.
@peter "Always remember that your program has faults and holes no one has discovered yet"
@peter "Always remember you are building something that can be used in an infinite variety of arbitrary ways by anyone at any time." — I don't know, it doesn't seem like a useful warning. Driving is something you have a control over while it happens. When you build and release things, you lose control over them, and it's not like you can embed moral safeguards in them.
@Wolf480pl @deshipu @peter there is a difference between crippling software and being thoughtful when you make decisions.
Technology is not neutral, as we naturally tend to favour the easiest things.
Your software can be modified (especially if it's free), but you have the moral authority to make it harder to use in harmful ways, thus encouraging positive uses (that's how safety works in every engineering field).
@webshinra @deshipu @peter so basically require stuff like --force --i-know-what-im-doing for certain actions?
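A minimal sketch of what that could look like, assuming a hypothetical CLI (the program name, flag names, and the guarded action are all made up for illustration):

```python
import argparse
import sys

# Hypothetical sketch: a destructive action gated behind explicit
# acknowledgement flags, as suggested above. All names are illustrative.
parser = argparse.ArgumentParser(prog="lasersaw")
parser.add_argument("target", help="device or file to operate on")
parser.add_argument("--force", action="store_true",
                    help="confirm you really want the destructive action")
parser.add_argument("--i-know-what-im-doing", dest="acknowledged",
                    action="store_true",
                    help="confirm you understand the consequences")
args = parser.parse_args()

if not (args.force and args.acknowledged):
    # Refuse by default, so the safe path is also the easy path.
    sys.exit("refusing to proceed: pass --force --i-know-what-im-doing "
             "to confirm this destructive action")

print(f"proceeding against {args.target}...")  # the dangerous part goes here
```

The point of requiring both flags is friction: the tool stays fully capable, but the harmful use stops being the path of least resistance.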
@Wolf480pl @webshinra @peter "You are pointing your laser saw at your mother in law, are you sure you want to proceed?" — unfortunately, the amount of sensors and logic processing required to recognize the morally doubtful situations is staggering, and ironically, morally doubtful itself — are you going to put a human level AI in every little gadget just make it harder to use it immorally? Wouldn't that be immoral to those AIs?
@deshipu @Wolf480pl @peter You could at least put in a sensor that says you are using the saw on a hard surface, or whatever is required to not cut your mother-in-law without, well, meaning it.
The designer would also probably not add a thrust-vectoring propulsion system or a fast-mover targeting assistant to your laser saw.
@webshinra @Wolf480pl @peter Then you quickly arrive at https://boingboing.net/2012/01/10/lockdown.html
@deshipu @Wolf480pl @peter no, you don't.
You're confusing being defective by design with being designed for a specific kind of task.
@webshinra @deshipu @peter ok, so you want to have appliances instead of general computing? You want a separate device for watching movies that will only let you watch movies? And another one for playing games, that will only play games? You want Apple?
@Wolf480pl @deshipu @peter I don't want to use my personal computer as a screwdriver.
And, well, I'm an emacs user, which is very safe to use with its default settings, and one of the most versatile monstrosities ever made.
I don't care about the ethical choices of proprietary software, as it is inherently unethical.
For FLOSS, I must remind you that coopetition is a living phenomenon.
@webshinra @deshipu @peter but with emacs in default settings, you can write an exploit for some vulnerability. Or fake news. Or an AI that controls killer drones. At least some of the above things are unethical. And yet emacs doesn't prevent you from doing them, does it?
@peter @deshipu @webshinra or more on the topic of the original post:
should developers of IRC clients make it harder to bully people on IRC?
should developers of SQL databases make it harder to write a query that searches the database for children under certain age in some range around a certain point?
or a query that searches for all people who posted a certain link? (that's mass surveillance)
should developers of social networks detect negative emotions in messages, and prevent these messages from reaching as wide an audience as they normally would? (for the sake of preventing social unrest)
@Wolf480pl @peter @deshipu
well, I'm an old IRC user, and I can answer you: clients' devs are already doing it.
Konversation has no AROK capability, and emacs has it disabled by default.
that's a social choice. If you want it otherwise, feel free to fork either one, or to make your point on the mailing-list equivalent.
please don't read into my words anything more than what I write; it's irritating.
@webshinra @peter @deshipu sorry, but I took your opinion as supporting the original post that started this conversation. Especially since you didn't specify what you meant by "harmful way".
Also, what do you mean by AROK? I tried a web search for it but found nothing relevant.
@Wolf480pl @peter @deshipu AROK is short for Auto Reco On Kick.
If I was vague, that was not by mistake.
what should be treated as harmful or not is subjective and relies on the author's convictions and knowledge.
@webshinra @peter @deshipu
I think the boingboing article cited earlier is relevant here, after all.
rejoin-on-kick is an additional feature. You can make an IRC client without it, and it'll still be a fully functional IRC client. It's ok not to include it.
OTOH, making an IRC client that refuses to send a message if it contains the word "nazi" wouldn't be good. Even if we agree that calling other people "nazi" is bad.
It wouldn't be a fully-functional IRC client if it did something like this.
@deshipu @peter @webshinra or, to get nazis out of the way, imagine an IRC client that goes out of its way to prevent you from rejoining a channel after you've been kicked, by remembering that you've been kicked and blocking the /join command for, say, 5 minutes.
That's unethical.
We shouldn't do such a thing.
Now, back on topic of this thread:
Of the things the original post is advocating for, which can be done by removing features but not crippling the core functionality?
@Wolf480pl @peter @deshipu that's your vision of functional.
choosing the default text encoding is also a social decision.
I personally think that a safe tool does not work against the users' interests (individually and collectively), and that you should make tools as safe as possible for the audience you have.
that's always a trade-off of pros and cons.
the IRC standard includes decisions with social consequences, and its designers certainly thought about how it would behave in the world.
@webshinra @Wolf480pl @peter I think I have a good example for this. The Linux "dd" tool, which is commonly used to write the image of a bootable disk to your USB flashdrive, but which can also, if you make a mistake, destroy your hard disk drive partition. A specialized tool that only works on flashdrives would be a better choice for all those "how to install Linux" tutorials, but it doesn't exist, so a more general and dangerous tool is used instead, sometimes leading to catastrophic mistakes.
@deshipu @Wolf480pl @peter I would be really happy to have an AI in my shell telling me that sdb is not the USB key I probably want to dd that iso onto.
please notice that:
- the input and output are explicitly required to be named with if= and of=, making it hard to swap them by mistake
- /dev/ entries are generally not writable by a non-root user.
that makes it harder to break the system with it by mistake.
@webshinra @Wolf480pl @peter Well, you don't need an AI for that, you can just check the udev properties. The thing is, you still need a separate general-purpose dd for all the other tasks.
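For what it's worth, a flashdrive-only writer like the one described above is only a short script. A rough sketch, assuming Linux, using the sysfs `removable` attribute as a stand-in for a full udev-property check (the tool name and behaviour are hypothetical, not an existing utility):

```python
#!/usr/bin/env python3
"""flashdd: hypothetical sketch of a dd-like tool that only writes to
removable devices, refusing fixed disks by default."""
import pathlib
import shutil
import sys

def is_removable(device: str) -> bool:
    # e.g. /dev/sdb -> /sys/block/sdb/removable ("1" for USB sticks)
    name = pathlib.Path(device).name
    flag = pathlib.Path("/sys/block") / name / "removable"
    return flag.exists() and flag.read_text().strip() == "1"

def main() -> None:
    if len(sys.argv) != 3:
        sys.exit("usage: flashdd IMAGE DEVICE")
    image, device = sys.argv[1], sys.argv[2]
    if not is_removable(device):
        # Fail closed: writing to a fixed disk is almost always a mistake here.
        sys.exit(f"{device} does not look like a removable drive, refusing")
    with open(image, "rb") as src, open(device, "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB blocks

if __name__ == "__main__":
    main()
```

It fails closed: anything that doesn't report itself as removable is refused, which is exactly the appliance-versus-general-purpose trade-off discussed earlier in the thread.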
@deshipu @Wolf480pl @peter that's not my point.
@Wolf480pl @deshipu @peter No, it doesn't.
But there is no M-x make-killer-drone-ai built in either.
@peter I feel like it's a good idea to sit down and think of the worst thing someone could use your technology for, and try to build in ways to mitigate that. Like "could someone use this to co-ordinate a terrorist attack?" or "how would a stalker use this?"
And I'm, like, thumbs down on the bullying and harassment and the mass surveillance, and a thumbs up on the mass social unrest.
@peter "Always remember you're deploying a potential tool for mass social unrest" 2 bad thngs and one neutral
@a_breakin_glass Yeah, in this toot I was thinking about the negative kind - for example the stirring up of hatred. But you're right, it's not necessarily a bad thing and could potentially be a good thing (depending on people's situations and perspectives).
@peter this is such a good thing
@peter "Always remember you're programming a potentially lethal weapon"...
@peter This. This so much.
@peter I wish I was deploying tools for mass social unrest.
@peter "Always remember you're designing a system to keep *non-technical* people safe."
@peter nothing wrong with that last one :^)
@peter Sounds like you watched a few episodes of Black Mirror in a row. :-)