http://www.bbc.co.uk/news/technology-39130826
- Users will be able to limit what they see from certain types of account, particularly ones with no profile picture or information
- The company said it would act only on accounts that its computer algorithms had thrown up as being abusive
- There has been mounting pressure on social media firms to deal with the growth of extremist propaganda, fake news and harassment on their platforms
- Nick Thomas, an analyst at research firm Ovum, said: "Given that cleaning up Twitter is imperative if it is to attract more advertisers to the site, there is pressure not just to act, but to be seen to act - which may explain why they are making multiple announcements."

This idea of 'being seen to act', which Nick Thomas mentions, is an interesting one. Twitter and Facebook in particular have featured prominently in news coverage about tackling fake news and extremist propaganda, yet so far they have done little to counteract these problems; the institution is telling the audience what it wants to hear without actually acting on it in the short term. The drastic rise of fake news has therefore made it difficult to trust online news sources, and the algorithms that are to be implemented make it difficult to be fully aware of what counts as abuse and what is merely 'banter'. The problem is who decides whether a comment is abusive or not; this raises ethical and social concerns, as every individual is different.
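To illustrate why this is so hard to automate, below is a minimal sketch of a naive keyword-based filter. This is not Twitter's actual system; the word list, threshold and function name are invented purely for illustration. It shows how the same words can be flagged as abuse or pass as banter depending on context the algorithm cannot see.

```python
# Minimal sketch of a naive keyword-based abuse filter.
# NOT Twitter's system: the word list and threshold are hypothetical,
# chosen only to show why 'abuse' vs 'banter' is hard to decide automatically.

ABUSIVE_WORDS = {"idiot", "loser", "shut up"}  # hypothetical word list

def looks_abusive(tweet: str, threshold: int = 1) -> bool:
    """Flag a tweet if it contains enough 'abusive' keywords."""
    text = tweet.lower()
    hits = sum(1 for word in ABUSIVE_WORDS if word in text)
    return hits >= threshold

# The same keyword reads very differently depending on who is talking to whom:
print(looks_abusive("You absolute idiot, nobody wants you here"))   # True - likely abuse
print(looks_abusive("haha you idiot, see you at the pub later x"))  # True - probably banter
```

A purely word-based rule flags both tweets, which is exactly the ambiguity the paragraph above describes: the decision about what is abusive ultimately depends on human judgement, not just pattern matching.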