When I moved to this country, the differences that struck me right away were not only people-related but also tech-related. In fact, coming to the US prompted a number of thoughts about computers and their everyday usage. Life here is much more technologically based, which, I believe, makes it a lot easier and more convenient. However, will the current state of human control over automatic innovations remain unchanged? Is it possible for people to keep control of the world over the coming centuries? Are we able to maintain the balance between using technology and thinking independently? It seems that the answer to each of these questions is “not quite”.
Computer Science allows us not to worry much about a lot of services on campus. Paying automatically for the laundry (of course, if we have enough money…), counting how many meals are left in the dining hall or opening the correct lock at the post office when we receive mail — everything with one swipe of our RU ID card. Those examples come only from the Rutgers community, but the potential of Information Technology reaches far beyond the New Brunswick area, and I’m sure everyone is aware of that. Online banking, tools like Spotify, social applications such as Facebook, Skype and video calls, and many, many more… all of them are possible now, yet they are things that people 50 years ago would never have dreamed of, or at least would have considered improbable in their near future. My grandmother’s prediction was that in 100 years people would see each other “over the hills and far away”. It became a fact after just 30.
The progress of Computer Science is something we are not, and never will be, able to stop. This fact is both fascinating and slightly terrifying. Recently I’ve read a couple of articles about Artificial Intelligence and its evolution. It’s interesting that, starting from the middle of the 20th century, every year we have been getting closer to losing our “most intelligent species on the planet” status in favour of computers. Yesterday (around three decades ago), the first cellphones, so different from the ones we are used to now, started to become popular. Today our life is more computerised than we even realise, and machines are much “smarter” (how can this word characterise something that isn’t human!) than we think. The case of Russian chess grandmaster Garry Kasparov, who lost a game to a computer, is well known, but how many of you were aware that in March this year, one of the entries to a Japanese literature prize was a story co-written by a computer — “Konpyuta ga shosetsu wo kaku hi”? Humans still have control over the intelligence of machines, but some scientists estimate that around the year 2030 machine intelligence will become indistinguishable from human intelligence. Progress in the computer science field keeps accelerating, and the question is… what will happen tomorrow?
What is striking to me, after becoming interested in Artificial Intelligence, is the importance of performing some actions on our own. It is incredibly easy to let all the applications we have downloaded do our work for us. Automatic reminders will never let us forget the vaccination we have to register for, PhotoMath will solve mathematical equations, calendar suggestions will tell us when it is the best time to do something and how we should do it.
There is nothing wrong with using these tools. What worries me, however, is the gradual lowering of human participation in performing even very simple tasks. My high school Math teacher would have gone crazy if she had seen someone older than seven using a calculator to compute 49 × 4. Again, the problem is not that we use the tools we have handy to make our lives simpler. My point is about the dangerous over-reliance on technology.
I believe that by making life more and more automatised, humanity is approaching a point of no return. Both in terms of intellectual supremacy, because eventually robots will become more intelligent, and mentally: our trust in computers has grown so big that we do not even question the accuracy of what we see on our screens. I’m far from entertaining visions known from Science-Fiction movies, such as robots taking control over the world and imprisoning people. I’m only saying that we are so used to the correctness of what computers provide us with that we have no will to check whether it really is correct. Automatic acceptance of the accuracy of what we see may weaken our diligence and skill of reasoning. Technological progress may result in a regression in independent, critical thinking.
What also makes me wonder is how we can use our brains to work efficiently on a very complex task if we are not willing to complete the simplest ones, such as multiplication. Is this case similar to Maslow’s pyramid? Can we perform tasks requiring very advanced brain usage without doing the ones that are far simpler?
Fortunately, at Rutgers, most people do not have a problem with overusing technology. I’m trying to take advantage of as many of the Computer Science opportunities here as I can — because that’s the discipline I want to major in — and to meet a lot of awesome people who help me do so. Apart from my class, I also attend the biggest CS club, called “USACS”, and the events it organises, such as the hackathon “HackRU”. I’m very happy to have the possibility to learn about a range of topics, from databases to GitHub usage and application programming. My long-term goal is to do research in Artificial Intelligence and have an impact on the development of this branch of science.
I leave all those questions unanswered just to make you think, and I hope that my thoughts were not too disturbing or depressing. I’m a huge fan of technology with all its uses, as well as of concepts such as the Sharing Economy and the Internet of Things. I only believe that in an era where everything becomes more and more automatic and requires less and less human participation, it’s worth sometimes stopping for a while to think.