Friday 30 September 2016

Theme 4 - Reflection

Both seminar groups unfortunately got cancelled this week. Since we would have mainly talked about different quantitative methods and, from what I understood in the lecture, used them in examples, the seminar would have been the main opportunity to reflect on the first of the two questions. Due to the cancellation, my view on the paper and on quantitative methods remains largely intact: the benefit of quantitative methods is the ability to put answers into numbers that are then easier to analyze and visualize, while qualitative input could have provided further information on the usage of Facebook. And lastly, the sample group used in the paper remains questionable, as it only accounts for university students, who tend to be in one age group and have similar social views and habits. The only addition I have is that it is not entirely clear whether Ellison et al. tested their quantitative questionnaire prior to gathering their final set of data.

In connection with Bergström and colleagues' paper I have come to understand that the stereotypical use of the casually dressed dark-skinned character versus the formally dressed white-skinned character was highly intentional, as they wanted to prove that there can be a behavioral change when experiencing an illusion of ownership during Virtual Reality use. The stronger the stereotype, the higher the chance of demonstrating a change in behavior. Thus my critique of their interpretation was misplaced, as further research along those lines would not change the proof of concept.
We discussed in class whether further research in the direction of my suggestion would be of interest, and it was concluded that it would not, as it would not lead to new knowledge. The journal where the original paper was submitted would not publish such a follow-up, and publishing it in another journal with a lower impact factor would understandably not be of interest to any researcher.
However, I still have to object to this to some extent: while I understand that further research will not change the existing proof of concept, research into why this phenomenon is possible and how it is triggered could be of interest to some research areas, or for example to game developers wanting to improve the VR experience and its storytelling. That being said, further research does not have to be done by the same researchers. My point is only that after a proof of existence it might be of interest to understand why the phenomenon exists, not only that it exists.

Theme 5: Design research



What is the 'empirical data' in these two papers?


The empirical data in Anders Lundström's paper is gathered using three different methods. Firstly, a "state-of-the-art analysis": driving all available electric cars and comparing the different ways of displaying the remaining range. This analysis also involved online research into the user interfaces used in the cars, drawing on information from manufacturers as well as from reviewers of the cars. The empirical data found here is that most manufacturers display the range as a single number, while some (BMW/Nissan) also display it in a map-based interface. Secondly, they gather data from online platforms and forums, specifically the topics that came up when searching for the term "guess-o-meter" on Google. The data found there shows, above all, a strong need to understand how the range is calculated and what influences it. Many online users seem frustrated with the sudden jumps in range depending on their style of driving or the use of additional features like climate control. Users of the Nissan Leaf even came up with a spreadsheet that cross-referenced battery status with driving speeds and thus gave a better range estimation than the built-in feature. Thirdly, they conducted several interviews, leaving them with a set of data from which they concluded that drivers have come up with their own range estimations, based either on experience or on the battery level gauge. The latter group seemed to have a better understanding of how different styles of driving as well as the use of additional features influence the range of the car. To summarize: their empirical data is the outcome of a state-of-the-art analysis, the findings of the analysis of online discourses, and the answers to the interviews conducted.
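To make the idea behind that owner-made spreadsheet more concrete, here is a minimal sketch of such a cross-referenced range estimate. All numbers, names and the climate-control penalty are my own illustrative assumptions and are not taken from Lundström's paper or the Leaf owners' actual spreadsheet:

```python
# Hypothetical sketch of the kind of range estimation the Nissan Leaf owners'
# spreadsheet performed: cross-referencing battery state with driving speed.
# All values below are illustrative assumptions, not data from the paper.

# Assumed average consumption (kWh per km) at different constant speeds,
# roughly reflecting that higher speeds drain the battery faster.
CONSUMPTION_BY_SPEED = {
    50: 0.12,   # city driving
    80: 0.15,   # country road
    110: 0.20,  # highway
}

BATTERY_CAPACITY_KWH = 24.0  # approximate capacity of an early Nissan Leaf


def estimated_range_km(battery_level: float, speed_kmh: int,
                       climate_control: bool = False) -> float:
    """Estimate remaining range from battery level (0..1) and driving speed.

    A flat consumption penalty is added when climate control is on, since
    forum users reported it as a major source of sudden range drops.
    """
    consumption = CONSUMPTION_BY_SPEED[speed_kmh]
    if climate_control:
        consumption += 0.03  # assumed extra load, purely illustrative
    remaining_energy = battery_level * BATTERY_CAPACITY_KWH
    return remaining_energy / consumption


# Example: 60% battery at highway speed with climate control on.
print(round(estimated_range_km(0.6, 110, climate_control=True), 1), "km")
```

The point of the sketch is simply that once battery level and driving conditions are tabulated together, the estimate no longer "jumps" unexpectedly, which is exactly what the frustrated forum users were after.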

In Ylva Fernaeus and Jakob Tholander's paper, the empirical data is gathered by qualitatively studying how children use the tangible programming system that the authors built up as a workshop activity in an art gallery. They iterate this process with a low-fidelity and a high-fidelity prototype to gather their empirical data. This data led them to "shift their focus from a focus on persistent representation and readability of tangible code structures, to instead focus on achieving reusability of programming resources".

- Can practical design work in itself be considered a 'knowledge contribution'?

While one can argue about the style of research and the quality of the outcome, it is very clear from these two papers that practical design work can be considered a knowledge contribution. By trying to come up with a good practical design, both sets of researchers were able to pinpoint existing problems and then come up with an improved way of tackling them. Even if the result might be a rudimentary solution at first, it can lead to further analysis, testing and design work on the problem at hand.

- Are there any differences in design intentions within a research project, compared to design in general?

It is safe to assume that there are differences in design intentions. In research projects one usually aims for a scientific explanation of a problem, which then leads to a better understanding of it and thereby to an improved design. In design in general, this scientific problem (or the reason why the design needs improvement) is often of little interest to the designer.
In short: in research the focus is on the scientific problem, whereas in general design the focus is simply on the design itself.

- Is research in tech domains such as these ever replicable? How may we account for aspects such as time/historical setting, skills of the designers, available tools, etc?

It is highly debatable whether research in tech domains is really replicable. It evidently is replicable to some extent when done shortly after the original research. However, as time progresses it becomes harder and harder to replicate the same situation as in the original research, due to the progress of the technology at hand. If, for example, one had studied augmented reality some 20 years ago, one would have needed expensive high-tech equipment to get even the simplest representation of mixed reality, whereas today a single smartphone combines gyroscopes, GPS, cameras, a display and even the computational power necessary to process all of that input.
That being said, the early studies usually are what led to the improved technology in the first place and are therefore of utmost importance for later research.

- Are there any important differences with design driven research compared to other research practices?

The most important difference of design-driven research seems to be the iteration process that always happens during design research. One never just comes up with a good design: the design has to be tested first, then refined, then tested again, then refined yet again, et cetera. Other research fields usually "just" research and (re-)define the problem, run experiments on it, draw conclusions from the findings and then stop.

Monday 26 September 2016

Theme 2 - Comments

https://u1eqtjc8.blogspot.com/2016/09/theme-2-reflection.html?showComment=1474887290740#c3851414234500845336

https://u1h02pv3.blogspot.com/2016/09/reflection-on-theme-2-critical-media.html?showComment=1474898863881#c5042938395913544904

https://u10o7oqf.blogspot.com/2016/09/theme-2-critical-media-studies-part-2.html?showComment=1474900847679#c7733749200868865975

https://u1j8du7c.blogspot.com/2016/09/theme-22.html?showComment=1474902770382#c6357769574455351708


Theme 3: Reflection

After the lecture and seminar on Theme 3 it is still very hard to come up with a good explanation of what theory is for a first-year student. In the seminar we mostly agreed on it being first and foremost an "explanatory framework for an observation or logical thought" that needs to be testable, somewhat along the lines of the dictionary explanation of the word itself. Then again, we also discussed that depending on which of the two papers you follow, it becomes unclear again. For Gregor there is something called Analysis Theory, which fits the earlier description to some extent, but Sutton & Staw say that this kind of work is undeniably not a theory. So we decided that the definition of theory depends on the field of research one is working in. That also led us to talk about Kuhn's and Feyerabend's work; they stated that there is no fixed structure for how to come up with a good theory and that in general the whole process is very messy.

When we were discussing our papers and what kinds of theories they use, I was asked what theory my chosen paper is based on (not what kind of theory it turned out to be). The paper itself, as stated earlier, is a mixture of Design and Action theory with a minor part of EP theory. However, what it is based on is hard for me to tell, or I am unsure about the meaning behind the question.
Larsson & Moe based their research on both qualitative and quantitative research papers that either state that there is a need for more research on political involvement among internet users, or that study whether people use the internet to get political information such as news or campaign topics. Even though many of their cited papers ask for more research on the connection between internet use during election times and the election outcome, Larsson & Moe only present a method for analyzing it and show it in practice with their case study of the 2010 Swedish general election. So while they did find a new way of analyzing and categorizing Twitter and its users, they do not answer the question the other papers have asked about how microblogging, or the internet in general, affects the outcome of elections and whether the two are in any way connected.
Whether that truly answers the question of what theory the paper in question is based upon, I am still unsure.

Friday 23 September 2016

Theme 4: Quantitative research

Which quantitative method or methods are used in the paper? Which are the benefits and limitations of using these methods? What did you learn about quantitative methods from reading the paper? Which are the main methodological problems of the study? How could the use of the quantitative method or methods have been improved?
The paper in question is "Connection strategies: Social capital implications of Facebook-enabled communication practices" by Nicole B. Ellison, Charles Steinfield and Cliff Lampe from Michigan State University, USA. Their main questions for the study are to find out whether "there are distinct patterns in communicational behaviors, which of these are more likely to predict bridging as well as bonding social capital". To achieve this, the authors first asked about demographics and measured psychological well-being using some items from the Rosenberg self-esteem scale. They then proceeded with the actual questionnaire about the participants' Facebook use, consisting of several questions on a 5-point agree/disagree Likert scale.
The benefit of using this kind of questionnaire is that one ends up with mathematically and statistically usable answers, enabling the researcher to compare answers more easily, to link them to, for example, the demographic characteristics of the interviewees, and to display them graphically. These methods, however, also limit the participants in some ways, as the phrasing of a question may bias the answer. Moreover, an interviewee may have more input than just the answers to the questionnaire, so one may miss an emerging trend that was not accounted for in the questionnaire.
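As a small sketch of why Likert-scale answers are so convenient to work with, one can treat the responses as numbers from 1 to 5 and, for instance, correlate a Facebook-use item with a social-capital item. The items, data and variable names below are invented for illustration and are not Ellison et al.'s actual items or results:

```python
# Illustrative sketch: Likert-scale answers (1 = strongly disagree ... 5 = strongly agree)
# can be treated as numbers and correlated. The data below is invented.
from statistics import mean, stdev

facebook_use = [4, 5, 3, 2, 5, 4, 1, 3, 4, 5]       # e.g. "Facebook is part of my daily routine"
bridging_capital = [3, 5, 3, 2, 4, 4, 2, 3, 4, 5]   # e.g. "I feel part of the campus community"


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equally long lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))


print(round(pearson_r(facebook_use, bridging_capital), 2))
```

This is the sense in which the answers are "statistically usable": the same numeric encoding would not be possible with open-ended qualitative answers.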
The main issue I see in this paper is less the way of questioning and more the composition of the sample group, as it consists exclusively of students. Moreover, it is not clear how they guarantee the randomness of the sample group. An improvement could be to enlarge or diversify the sample group so that it also contains non-students as well as a wider range of ages, since undergraduate students tend to be of a certain age range with very few exceptions.

Reflect on the key points and what you learnt by reading the text. Also, briefly discuss the questions below.
Which are the benefits and limitations of using quantitative methods? Which are the benefits and limitations of using qualitative methods?
Bergström and colleagues analyze how people's behavior can be influenced by the virtual character they control within immersive Virtual Reality. In particular, participants are put in an environment with a neutrally dressed Asian virtual character that plays a base rhythm on a hand drum. The participant has to play the drum in any way they see fit. First, they play it while seeing only a pair of neutral white hands without an attached body. After a certain time the body is switched to either a light-skinned, formally dressed type or a dark-skinned, casually dressed type. Using upper-body tracking, the authors analyze how the different virtual characters influence the behavior of the participant. They conclude that people controlling the dark-skinned, casually dressed character changed the way they played the drums in comparison to the neutral white-hands situation, whereas people controlling the light-skinned, formally dressed character did not change the way they played the drums compared to the neutral white-hands setup. While they discuss some other interpretations of their test, it would be interesting to see how a mainly dark-skinned rather than Caucasian group would react to this experiment and whether they would show the same behavioral change as the Caucasian test group. Furthermore, a setup in which only the clothing of the characters is changed would be of interest, as one might argue that it is the casual versus formal dress code that influences the behavior and not the skin color. Whether it is the latter could be determined by setting up another test using an instrument that originated in Caucasian regions.
The benefits of quantitative methods are that one receives mathematically, statistically and graphically usable data that can easily be processed and interpreted. Furthermore, relations between several of the factors in a quantitative questionnaire are more easily established. Qualitative research, on the other hand, can give more input about opinions and possible new trends that would otherwise go unnoticed in quantitative research, as they have not been specifically asked about in the questionnaire.

Friday 16 September 2016

Theme 3: Research and theory



Generally speaking, theory is "A set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or is widely accepted and can be used to make predictions about natural phenomena." (1) In simpler terms, it is an explanation of a scientific problem or question that can be tested and investigated.
That being said, there is considerable debate about what exactly theory is. One of Gregor's types of theory is "Analysis Theory", whose key attributes are that "the theory does not extend beyond analysis and description" and that "No causal relationships among phenomena are specified and no predictions are made". Sutton and Staw, however, write that a lack of causal relationships and a lack of predictions are exactly what theory is not. Personally, while I see the benefit of Gregor's categorization and also the importance of publications within this analysis category, I tend to agree with Sutton and Staw's view that good theory has to connect the facts and come up with a new answer to the problem, otherwise it is just a blatant repetition of facts.

I chose "Studying political microblogging: Twitter users in the 2010 Swedish election campaign" by Larsson & Moe (2), published in 2012 in the journal New Media & Society, edited by Steve Jones (University of Illinois at Chicago, USA). The journal focuses on "communication, media and cultural studies", with, it seems, a particular focus on social media. It has an impact factor of 3.11, was first published in April 1999 and appears roughly every 4 to 8 weeks.

Larsson and Moe's article studies political microblogging and the participation of Twitter users within it. Their research is conducted as a case study of the Swedish general election in 2010. Larsson and Moe give a brief introduction and background on blogging, especially political blogging, and on microblogging, in this case Twitter.

Through an online service, Larsson and Moe collected Twitter data related only to the election topic by gathering data around the most used hashtag (#val2010). According to them, previous studies always collected data from the entire "Twittersphere" and were thus less focused on the topic at hand. They analyzed the data with statistical software and illustrated it using visualization graphs. They collected data starting one month prior to the election and ending several days after it. They found that about half of the tweets about the election were posted on election day itself; however, some earlier spikes can be seen, which Larsson and Moe linked to other media or political events.
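A minimal sketch of this kind of collection and aggregation, assuming a simplified tweet structure and invented example data (not Larsson & Moe's actual dataset or tooling), could look like this:

```python
# Sketch: keep only tweets containing the campaign hashtag and count them per
# day, so that spikes (e.g. on election day) become visible. The tweet
# structure and contents are invented for illustration.
from collections import Counter

tweets = [
    {"date": "2010-09-18", "text": "Debate tonight! #val2010"},
    {"date": "2010-09-19", "text": "Don't forget to vote #val2010"},
    {"date": "2010-09-19", "text": "At the polling station #val2010"},
    {"date": "2010-09-19", "text": "Unrelated tweet about the weather"},
]

HASHTAG = "#val2010"

# Filter on the hashtag, then count the remaining tweets per day.
per_day = Counter(t["date"] for t in tweets if HASHTAG in t["text"].lower())

for day, count in sorted(per_day.items()):
    print(day, count)
```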

They furthermore analyzed the communication between individual Twitter users. Using the number of directed tweets (aimed directly at another user using the @ sign), they categorized the users into Senders (sending out @-tweets), Receivers (receiving @-tweets) and Sender-Receivers (doing both frequently), and were thus able to determine who is among the main drivers of the political debate. They further used retweets (forwarded messages/tweets from other users) to again put users into three categories: Retweeters (users who forward many messages), Elites (users whose messages are popular and are thus retweeted frequently) and Networkers (being retweeted a lot as well as retweeting a lot). They found that while most of these users belong to the group of Networkers, there are also some Elites who, despite not being very active with retweets, seem to hold popular opinions.
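The categorization itself can be sketched as a simple set of rules. The threshold below is my own assumption for illustration; Larsson & Moe's actual cut-off points are not given in my summary:

```python
# Sketch of the user categorisation described above. THRESHOLD is an assumed
# cut-off for what counts as "frequent" activity, purely for illustration.

THRESHOLD = 10


def directed_tweet_role(sent_at_tweets: int, received_at_tweets: int) -> str:
    """Categorise a user by @-tweets sent and received."""
    if sent_at_tweets >= THRESHOLD and received_at_tweets >= THRESHOLD:
        return "Sender-Receiver"
    if sent_at_tweets >= THRESHOLD:
        return "Sender"
    if received_at_tweets >= THRESHOLD:
        return "Receiver"
    return "Low activity"


def retweet_role(retweets_made: int, times_retweeted: int) -> str:
    """Categorise a user by retweets made and times their tweets were retweeted."""
    if retweets_made >= THRESHOLD and times_retweeted >= THRESHOLD:
        return "Networker"
    if times_retweeted >= THRESHOLD:
        return "Elite"
    if retweets_made >= THRESHOLD:
        return "Retweeter"
    return "Low activity"


print(directed_tweet_role(25, 3))   # -> "Sender"
print(retweet_role(2, 40))          # -> "Elite"
```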

They found that most of the discussion is driven by a small number of users. These users, however, are mostly public figures such as politicians, journalists or well-known bloggers. Nevertheless, there are also some anonymous users who, according to Larsson and Moe, "signal the potential for outsiders and less conventional voices to speak up via Twitter". They state, however, that the overall share of Twitter within the election is negligible, as the "estimated number of Twitter users varies between 1 per cent and 8 per cent of internet users".

They admit that there are some limitations to their approach of collecting topic-related data. While the hashtag #val2010 in their case was used consistently throughout the period, some data might have slipped away due to misspellings or the hashtag being dropped in ongoing discussions.

It seems that Larsson and Moe's article mostly corresponds to Design and Action theory (V.), providing the reader with a method of analyzing Twitter discussions, with some EP theory (IV.) content, mainly the linking of spikes in the number of tweets to media coverage of the election and the suggestion of analyzing not only the structure of the tweets but also their content.

By combining these two theory types, Larsson and Moe are able to provide a solid method for further research together with a case study. Some more explanation theory would have been interesting, for example by connecting the use of directed tweets with the use of retweets. With that, and also more prediction, they would have been able to establish a good theory according to Sutton and Staw.


(1) American Heritage® Dictionary of the English Language, Fifth Edition. S.v. "theory." Retrieved September 16 2016 from http://www.thefreedictionary.com/theory
(2) Larsson, Anders Olof, and Hallvard Moe. "Studying political microblogging: Twitter users in the 2010 Swedish election campaign." New Media & Society 14.5 (2012): 729-747.

Wednesday 14 September 2016

Theme 2 - Reflection

In the seminar we concluded that while Adorno and Horkheimer (A&H) were also referring to the Age of Enlightenment, they mainly referred to it as a conceptual idea (how we think). In the discussion it became clear that the uncertainty I wrote about is a myth in itself. So in simpler terms: a myth is enlightened and becomes myth again through ideologization. It also leads to domination, not only through ideologization, but also by means of conquering new boundaries. As examples we mentioned the scaling of Mt. Everest, the Titanic and the Hindenburg Zeppelin.

A&H see dialectic not only as a methodology for arguing (as I stated) but as a whole concept, in particular the Marxist dialectic, where we need to change the material conditions in order to change society, as opposed to Kant's idealism, where the ideas change society. An example would be praising a new invention (e.g. the Internet: unlimited knowledge, bringing the world closer together, etc.) and later arguing against it and its different outcome (too much trolling, bullying, racism, porn, lack of privacy, etc.).

My definition of Nominalism was too brief, because I did not see the connection between it and the text. Only after we discussed Plato's allegory of the cave did it make sense to me. In Nominalism the philosophers need to disconnect themselves from real-world objects and conceptualize instead. Conceptualization and generalization, however, become dangerous because they can be used to oppress people (see Nazi fascism, German communism, etc.). A&H argue, though, that they can also be used for good (e.g. human rights, equality, feminism, etc.).

My earlier statement about myth was partially incorrect, as A&H say that a myth does not have to be god-like but can also refer to any idea or concept that has not been proven (enlightened) yet. A myth needs to be articulated first before it can be 'enlightened' and turned either into truth (knowledge) or be disproven.

Since writing my answer about super- and substructure, I have read an article on how the behaviour of making a "phone call" has changed over time. More and more people are no longer holding the phone to the side of their heads, but in front of their mouths. This has to do with the technology of leaving short audio messages instead of having a real-time conversation. It changes not only how we "make a phone call" but also how we have a conversation: it can be spread over a longer time, much more like a correspondence than a conversation.
While the substructure (the smartphone) slowly forms the superstructure (our social behaviour), the superstructure can also limit and influence the substructure: even though most people no longer primarily use their phone as a phone, phones are still designed to be phones, even though "phone" is "just another app we choose to use".

When I answered the second question I did not quite see how A&H's view on revolutionary potential differed from Benjamin's. Benjamin was very clear about its potential: he used photography and film as examples of media that enable you to see things in a new way (see Muybridge's The Horse in Motion, 1878). Having witnessed the Second World War and after fleeing to America, A&H expected this revolutionary lifestyle but realized that American capitalism was as oppressive and deceiving as Nazi fascism.

It seems I confused Benjamin's views on historically determined perception: I wrote as an example that an object (an iceberg) is seen very differently depending on one's prior historical and cultural knowledge (pre-Titanic versus post-Titanic), whereas Benjamin seemed to mean how the same object is portrayed over time. Examples mentioned in the seminar were paintings of Jesus from the Romantic age that showed him wearing Swedish folk dress.
His arguments were aimed at the German fascist view on art. He argues that there cannot be objectively good art; it always depends on one's taste.

Benjamin is fairly ambiguous about aura. On the one hand he states that any copy of a work of art will not have the same aura as the original; on the other hand he argues that by multiplying art one diminishes the privileges of art owners (rich, bohemian people) and that reproduction can therefore be a liberating and revolutionary thing.

Monday 12 September 2016

Theme 1 - Reflection

After the lecture and especially after the seminar it became clear that, while Plato's text never truly introduces a new concept, his statement that we perceive through the senses and not with them can widely be seen as an early form of empiricism. He also uses a form of dialectic to disprove Theaetetus' definitions of knowledge by confronting his hypotheses with counter-statements and thus nullifying them.
Plato does, however, write that what our senses perceive has to be somehow compared to the knowledge inside our soul. One could argue that this is an early and rudimentary version of Kant's idea about the faculties of knowledge and how everything we perceive has to be organized into these categories.
"perception without conception is blind"
This means that if we only received the signals from our senses but did not make any sense of them by organizing them (or comparing them to what we knew beforehand), we could not have (or gain) any knowledge.
Kant, however, does not agree with the concept of empiricism. He argues that if it were true, it would have been impossible for Copernicus to discover his model of the universe. Kant asks how it could be possible to have synthetic knowledge a priori.
This, however, also means that knowledge presupposes knowledge: without any prior knowledge of, for example, the categories, we cannot categorize new input. Therefore knowledge can never be "pure", as it depends on prior knowledge as well as on one's cultural, historical or linguistic background.
We also discussed that Kant was apparently misinterpreted by many as a skeptical or empirical idealist, which led him to write a second book in which he corrected this interpretation and made clear that his work was meant as critical or formal idealism.

Friday 9 September 2016

Theme 2: Critical media studies

Dialectic of Enlightenment

1. What is "Enlightenment"?
Enlightenment refers to the Age of Enlightenment, a "philosophical movement of the 1700s that emphasized the use of reason to scrutinize previously accepted doctrines and traditions and that brought about many humanitarian reforms." (1) Adorno and Horkheimer refer to it as "the advance of thought", which arguably could be seen as a definition of knowledge. They furthermore say that it "is mythical fear radicalized" and that "Enlightenment [...] left nothing of metaphysics behind except the abstract fear of the collective from which it had sprung." They say that the Enlightenment and its answers to metaphysical questions have left people with an uncertainty about the meaning of life and have therefore made room for ideological views and beliefs.

2. What is "Dialectic"?
Adorno and Horkheimer state: "The concept, usually defined as the unity of the features of what it subsumes, was rather, from the first, a product of dialectical thinking, in which each thing is what it is only by becoming what it is not." They further write that "dialectic discloses each image as script. It teaches us to read from its features the admission of falseness which cancels its power and hands it over to truth." This describes dialectic as a method for finding the truth by confronting a statement with specific counter-statements or negations of that statement.
 
3. What is "Nominalism" and why is it an important concept in the text?
The American Heritage Dictionary defines Nominalism as "the doctrine holding that abstract concepts, general terms, or universals have no independent existence but exist only as names." (2) Therefore they exist only in the mind and are a concept of knowledge. In relation to the text, it is an important concept for overthrowing any myth.

4. What is the meaning and function of "myth" in Adorno and Horkheimer's argument?
Adorno and Horkheimer use the term "myth" for any unanswered metaphysical question as well as for anything unknown or untrue.



"The Work of Art in the Age of Technical Reproductivity"

1. In the beginning of the essay, Benjamin talks about the relation between "superstructure" and "substructure" in the capitalist order of production. What do the concepts "superstructure" and "substructure" mean in this context and what is the point of analyzing cultural production from a Marxist perspective?
Substructure refers to anything directly related to production, such as machines, factories, tools and the like. Superstructure refers to everything else, for example art, religion, family, politics and so forth. Benjamin sees the substructure as a vital component for the rise and flourishing of the superstructure.

2. Does culture have revolutionary potentials (according to Benjamin)? If so, describe these potentials. Does Benjamin's perspective differ from the perspective of Adorno & Horkheimer in this regard?
Benjamin states that the superstructural concepts are "useful for the formulation of revolutionary demands in the politics of art." He mentions photography as one of these potentials: a painting is always a perception of the painter and of how he sees the world he is painting, whereas a photograph can capture a piece of reality in an instant of time.


3. Benjamin discusses how people perceive the world through the senses and argues that this perception can be both naturally and historically determined. What does this mean? Give some examples of historically determined perception (from Benjamin's essay and/or other contexts).
Our senses only forward raw data to our mind. Only there can we compare this data with former and present experience (knowledge) and associate a meaning with it. This, however, means perception is dependent on our prior knowledge of history and society. Therefore the same input, let's say a picture of an iceberg, has a completely different significance for a person who lived before 1912 than for a person who lives after 1912.


4. What does Benjamin mean by the term "aura"? Are there different kinds of aura in natural objects compared to art objects?
Aura is described by Benjamin as "the unique phenomenon of a distance, however close it may be". He uses the outline of a mountain range on the horizon as an example: when you see this outline you feel the mountains' aura. He further states that the aura of works of art "is never entirely separated from its ritual function. In other words, the unique value of the “authentic” work of art has its basis in ritual, the location of its original use value." Therefore any copy of a work of art can never have the same aura as the original piece.

(1)American Heritage® Dictionary of the English Language, Fifth Edition. S.v. "enlightenment." Retrieved September 9 2016 from http://www.thefreedictionary.com/enlightenment
(2)American Heritage® Dictionary of the English Language, Fifth Edition. S.v. "nominalism." Retrieved September 9 2016 from http://www.thefreedictionary.com/nominalism

Friday 2 September 2016

Theme 1: Theory of knowledge and theory of science


In the preface to the second edition of "Critique of Pure Reason" (page B xvi) Kant says: "Thus far it has been assumed that all our cognition must conform to objects. On that presupposition, however, all our attempts to establish something about them a priori, by means of concepts through which our cognition would be expanded, have come to nothing. Let us, therefore, try to find out by experiment whether we shall not make better progress in the problems of metaphysics if we assume that objects must conform to our cognition." How are we to understand this?

Kant states that the prevailing view at the time was that knowledge can only be gained by experiencing objects through our senses and that our cognition must conform to them. He further states that if one follows that definition, any idea, theory or assumption one comes up with is immediately void, as it did not originate from experience and therefore does not comply with said definition. Kant suggests overturning that assumption: instead of taking our observations as the sole truth, we should test our knowledge and assumptions against our observations of the world and see if this makes it easier to answer the questions of metaphysics. In simpler terms: instead of taking everything we see as the sole truth, we should see if we can come up with answers to metaphysical questions and then test these answers by examining our surroundings and through experiments.
As an example he takes Copernicus, who broke with the contemporary view that the universe revolves around the earth. Only by doing so, and by assuming something which he had so far not been able to observe or experience, was he able to come up with the notion that in fact the earth is turning and the rest of the universe stands still.
To conclude his statement: in order to gain knowledge we should test theories and assumptions against our observations, not against what has so far been known as the truth, for it might no longer be the truth.

At the end of the discussion of the definition "Knowledge is perception", Socrates argues that we do not see and hear "with" the eyes and the ears, but "through" the eyes and the ears. How are we to understand this? And in what way is it correct to say that Socrates' argument is directed towards what we in modern terms call "empiricism"?

In the discussion that follows the above statement, Socrates explains to Theaetetus, or rather lets him discover through a variety of statements and questions, and by countering statements with examples and comparisons to other statements, that what he hears and sees are simple sensations that reach his mind and soul through the organs, and that only through comparison in the mind are they properly defined. He further states that these sensations are given to men and animals at birth, but their meaning can only be learned through "education and long experience". (1) This means our senses are mere instruments for the brain to utilize: we perceive the world around us through our senses, but it takes the brain's analysis to characterize those perceptions.
To compare this view to the modern term "empiricism", we first need to look at the definition of said term. According to the American Heritage Dictionary of the English Language, empiricism is
"The view that experience, especially of the senses, is the only source of knowledge." (2) While this already seems to be in accordance with Socrates' view that knowledge is gained only through experience and education, it does not quite account for the process by which the perceptions of the senses become knowledge through comparison. The Collins English Dictionary definition of empiricism, "the doctrine that all knowledge of matters of fact derives from experience and that the mind is not furnished with a set of concepts in advance of experience" (3), seems to better account for Socrates' view that while everyone is given those senses at birth, only "through comparison with past and future things in the soul" do they transform into knowledge. While it depends slightly on the definition of empiricism how well Socrates' argument exemplifies it, looking at the argument as a whole it becomes quite clear that it is indeed directed towards said term.



(1) Plato, Theaetetus; translated by Jowett, Benjamin; Project Gutenberg, web, published 01.04.1999
(2) American Heritage® Dictionary of the English Language, Fifth Edition. Copyright © 2011 by Houghton Mifflin Harcourt Publishing Company. Published by Houghton Mifflin Harcourt Publishing Company. All rights reserved.
(3) Collins English Dictionary – Complete and Unabridged, 12th Edition 2014 © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003, 2006, 2007, 2009, 2011, 2014