Thursday, November 5, 2015

Project 3 Outline

In this post I will show you my outline for project 3, in which I will be writing a listicle about my opinion on AI development.
Wellness GM. "Typing on Keyboard- Computer Keyboard" 01/09/15 via Flickr. Attribution-ShareAlike 2.0 Generic.
Introduction:
Connect the Issue to Your Audience's Worldview- This is the best approach to use because my topic is actually pretty relatable in today's society. Everyone wants their phone or smart device to do more, and wants life in general to be easier and more convenient.
  • Start with a hook- discuss today's technology and how amazing it already is (mostly things that are already on the market, but mention technology that will be available very soon).
  • Move on to how this technology can be improved (and will be improved in the future).
  • Talk about why some people don't want AI technology developed, and give a quick reason why they are wrong
  • Transition to main part of article
Body:
Major Supporting Arguments:
  • Medical AI can help everyone, or their loved ones, at some point in their lives
  • AI is being used for good causes
  • AI is already nearly everywhere in society
  • AI technology will eventually make the hardships of life much easier to handle
  • AI has already helped many people in its current form
Major Criticisms:
  • AI is a ploy by the government to violate our privacy
  • We can't control AI well enough, which will lead to AI overthrowing humans
  • It is a waste of money
  • Society will become lazy and won't appreciate Earth
Key Support and Rebuttal Points:
  • AI developers already understand this concern and are not looking to invade privacy
  • AI will not be difficult to control once we figure out the workings of its systems
  • AI development is funded mostly by independent funding agencies. Even so, it is not a waste of money, because AI has the potential to save lives.
  • AI is going to be focused more on convenience- not necessarily like what we see in dystopian sci-fi films.
Topic Sentences:

  • Today, everyone is concerned with who is in their business and why. This isn't any different with AI technology, which is said to have the potential to invade everyone's private life and spread it to unwanted places. However, the developers understand this concern and are already looking for ways to keep the user of AI in control, not the technology itself.
  • So, who is to say that AI won't go rogue and destroy humankind? Well, many scientists, including Alan Winfield, claim that the technology they develop, while complex, won't be difficult to control because the scientists will already know the ins and outs of what they produce.
  • People usually don't consider things like first-aid kits or medicine a bad purchase, so why should AI be any different? AI is not only pretty cool in what it can do to make our lives easier- but it can also save lives.
  • When we first think of robots, what comes to mind? I, Robot, Terminator, WALL-E, maybe the Roomba? Either way, the AI of the future won't necessarily be like what we see in the films. Instead, it will be focused simply on convenience and safety.

Evidence:

  • Source- AI developers have good intentions, especially when it comes to the privacy of the consumer.
    • We've been working with systems that can figure out exactly what information they would best need to provide the best service for a population of users, and at the same time then limit the [privacy] incursion on any particular user
  • Source- AI is not going to turn out how we see in common media and will be much easier to control than we think.
    • Part of the problem is that the term "artificial intelligence" itself is a misnomer. AI is neither artificial, nor all that intelligent. As any food chemist will tell you, beyond the trivial commonsense definition, the distinction between natural and artificial is arbitrary at best, but more often than not, ideologically motivated. AI isn't artificial, simply because we, natural creatures that we are, make it.
  • Source- AI is already controlled by humans because we decide what technology does and doesn't know.
    • I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.
  • Source- AI will be used for good: an example being that this technology is already being used to save lives.
    • Within seconds, she had the name of another drug that had worked in comparable cases. “It gives you access to data, and data is king,” Mariwalla says of Modernizing Medicine. “It’s been very helpful, especially in clinically challenging situations.”

Map of my Argument:


Conclusion:
I think that leaving the audience with the negative consequences of not allowing AI to develop would be a good ending thought. A listicle's conclusion is very short, so I will simply leave them with the thought that the human race will never be able to develop further if we do not take advantage of this new technology.

1 comment:

1. I honestly couldn't really find any issues with your outline. Your introduction seems well planned out in terms of framing the issue and leading in with the public's current view. Your body paragraphs seem abundant in specific evidence, context, and evaluation. And your conclusion seems to be a good choice; leaving the audience with the negative consequences will give them that last impression that will keep them on your side of the argument. Good job!
