Monitoring of individual data
- Improve Quality of Life
- Recommendation algorithms
- Improve Quality of Life
- Privacy concerns
- Security concerns
Facebook - Cambridge Analytica
- Facebook gave an app access to more information about a person than users expected
- Donald Trump campaign
- Ted Cruz campaign
What are some general examples of how dataveillance can be used unethically? Are there any ethical applications of dataveillance? When may it be an ethical decision and why?
- Influencing elections, manipulating public opinions.
- Breach of privacy, examples: selling data to third parties
- Unethical if the users do not have a reasonable expectation that their data will be sold.
- Surveillance of people: can unfairly target a specific group of people by seeing their social media habits. Grounds for discrimination.
- Methods of data collection:
- Always-on microphones and other discreet forms of dataveillance. Example: Facebook Messenger had access to the phone microphone and allegedly could pick out keywords from what was being said to serve you targeted ads. Users do not expect this, so harvesting this kind of data is unethical.
- Unclear Terms of Service, with clauses added just to cover the company legally. The most important terms are often buried rather than placed up front.
- "Accept all cookies" button
- Background searches:
- Companies can deny an applicant further advancement due to their affiliations on social media (this may be warranted or unwarranted).
- Social media may not give a fair image of who you are
- Providing a better user experience to users, recommendation algorithms.
- Targeted advertising: can give users options that they are most likely to want and gives them consumer options.
- Surveillance of crime, allows law enforcement to catch criminals and enforce the law.
- Methods of data collection:
- The user expects to hand data such as search history, app usage, and browsing habits over to the website. So if dataveillance occurs on that set of data, the user bears some responsibility for it.
- Background searches:
- When the work involves sensitive or confidential data, you want to make sure that potential applicants are suitable.
Is the fine-grained marketing enabled by dataveillance unethical compared to traditional marketing techniques? Are they both unethical?
- Fine grained:
- If you are going to see an ad, it is more advantageous that you see an ad that is relevant to you.
- Targeted ads can cross over into manipulation.
- Political advertising can reinforce confirmation bias. It can make you a less informed voter by not exposing you to ideas from each side.
- Some ads will not be served to users based on their psychographic profile. Ties into the previous idea of not getting a wide range of ideas. Reinforces views that you already hold. ECHO CHAMBER
What were the ethical considerations of both Facebook and Cambridge Analytica during the scandal? What ethical flaws were both parties demonstrating? (Blindness, Negligence, Recklessness, Incompetence)
- Incompetent in stopping Cambridge Analytica
- Shadow profiles - reckless to store information about people who never consented
- Misleading about what they did with the data
- Suggestive targeted ads which are not obviously ads
- Recklessness, Exploitation
How were other parties (Not Cambridge Analytica or Facebook) affected by unethical behaviour?
- Survey responders
- Misled about what the data would be used for
- Friends of responders to the survey
- Breach of privacy
- Democratic Party (similarly for parties in the EU)
- Donald Trump campaign used the data for targeted ads
Did Facebook users have a reasonable expectation that their data would be used by Cambridge Analytica? Is it ethical for Facebook to demand that users relinquish their privacy in return for using Facebook?
- Using a service means agreeing to the terms that the service outlines
- Scraping friends' data: six degrees of connection
- It wasn't Facebook itself that misused the data; it is reasonable as long as it is outlined. If you use a service on Facebook, you agree to its terms.
- Depends on the user: you choose how much information you share.
- Facebook makes its money off ads
- Users should know what it is collecting and be able to opt out
- Example: listening to voice recordings
- As long as you know what information is being collected, it is reasonable
What are 'shadow users' and 'shadow profiles'? Is it ethical to store data about shadow users?
Shadow users - users who don't have an account, but data related to them has been stored
- No: shadow users had no choice, so Facebook should not be collecting their data
- As long as you have the choice: informed consent
- Users have a right to know what data the platform holds about them
- Allow users options and choice
- Depends on the intent behind the collection
Does Facebook have an ethical responsibility to ensure that the data that they share with others is not misused? Should the third parties themselves be held to a higher standard? Who holds ethical responsibility for the data of users?
- Both parties hold the responsibility
- How would they enforce it?
- Terms and conditions
- What is to stop third parties from misusing the data anyway?
- AI that detects hate speech, monitoring: definitely Facebook's responsibility
- Still need more than that
- Users themselves have some responsibility
Did the lack of regulations around dataveillance lead to the scandal?
"Those who downloaded the app voluntarily turned over reams of personal data about what they like, where they live, and in some cases, depending on individual privacy settings, who their friends were."
- (Too many access scopes?)
- Be more transparent with what's actually happening when authorising a Facebook account
What are some ethical guidelines that Facebook could implement to stop abuse of data?
- Each type of data needs a reason/explanation to be requested
- Regulating every single developer (but this is not scalable)
- Even if people were told that their data might be given to another third party, most would not care much unless they are particularly privacy-conscious
What was Cambridge Analytica's aim in using this data?
- Predominantly for political advertising
- Used in other political campaigns
- e.g. India 2010 election
- e.g. Kenya 2013 election
- e.g. Kenya 2017 election
- e.g. Malta 2013 election
Is it ethical to build psychographic profiles of people if they willingly gave out their data?
- Mostly ethical if consent is given, but people need to be aware of the intention behind building psychographic profiles
- Could be used maliciously
- Comparable to Google Drive, where you can give other apps like draw.io consent to store files in your Drive
Was Facebook's initial response to Cambridge Analytica ethical? What other ethical decisions could Facebook have made to prevent or deal with the scandal?
- Zuckerberg took five days to respond, and when he did it was in a CNN interview, where he did not really reveal the strategies that would be used to remedy the issue, nor state how effective Facebook's response might be.
- Not ethical: manipulative in that he waited five days for public anger over the issue to simmer down.
- He should have restricted data access for companies; the sharing of your profile should be a matter of consent, but CA also pulled friends' profiles, which allowed them to scrape millions of American Facebook profiles. Facebook also needed to be far more transparent with users about the sharing of their data and how it might be used.
- He obviously should have replied sooner, not been secretive, and been more transparent about even the methods that would be used to remedy the issue.
Should Facebook continue to allow dataveillance of their users? Justify why they should or should not according to ethical principles.
- The problem is that dataveillance is the core of their business model. It allows them to sell personal data to advertising companies, while from a consumer perspective the model is framed as using data to "better serve" users.
- If they collect the right amount of data and use it in the right way, then it is justified as long as they are transparent with their users
- There is still the concern of very young people giving consent for themselves and using the platform; there must be more self-regulation around this. It is legal but not best practice.