Tuesday, March 10, 2009

IT 695 - Week 8 - The Digital Divide

Reading: Moser, M. A. (2009). Text "Superpowers": A study of computers in homeless shelters. Science, Technology, & Human Values. 000, 1-36.

This article details findings of ethnographic research on the effects of computers in homeless shelters. Interviews were conducted over the course of a year with 42 homeless people and staff from three homeless shelters in Calgary, Canada. The author’s research design was based on Dorothy Smith’s institutional ethnography, which investigates structures through the actions of individuals and those actions’ effects on the structures themselves. Through the interviews, Moser’s aim was “to bring different experiences into proximity so that we can explore the ways that individual experiences (in this case, computer use) correspond with macrosocial discourses (in this case, surrounding the Smart Communities program).”

As a result of a government initiative called Connect Calgary, computers were installed in three homeless shelters for use by both homeless clients and caseworkers. The aim of the program was to provide internet access and training for homeless people in an effort to facilitate their employment and an improvement in their circumstances. What Moser found was that while this initiative did not necessarily have its intended effect, it did have a positive effect for the clients who took advantage of the labs. First, she was surprised to find that most of the homeless users of the lab were already computer literate, possibly because those who were not computer literate chose not to use the labs.

Use of the labs tended to fall into one of four areas: entertainment to fill time, maintaining a place in communities, contact with support systems, and “crossing over,” or transitioning out of homelessness. Several factors played a part in the use of the computers, including the placement of the lab within the facility and the relative social status conferred by use.

Moser also looked at the use of computers by the staff, which was mostly in two ways: entering data on clients and looking for services to help them. Some of the intended uses of the computer, including centralized file keeping, were not embraced, possibly because shelters were somewhat territorial about sharing their resources and methods.

While the computers may not have been used exactly as the initiative envisioned, Moser’s findings indicate that they helped create a positive attitude shift and added stability to the otherwise unpredictable circumstances of the homeless users.


Thursday, February 26, 2009

IT 695 - Week 7 - Design-Based Research

Reading: Hoadley, C. (2004). Methodological alignment in design-based research. Educational Psychologist. 39(4), 203-212.

In this article, Hoadley explains what design-based research is, why it is particularly applicable in educational research, and gives an example from his own work with computer-mediated discussion tools.

Unlike scientific research that occurs in controlled lab settings, design-based research is all about context. Hoadley suggests that the context of the study and the history/experience of the participants must be continuously considered, evaluated, and incorporated into the evolution of the research. He identifies four important factors in design-based research: 1) the participants & implementers have a close relationship, which "blurs" objectivity; 2) tentative generalizations don't imply universality; 3) researchers start with planned comparisons but are prepared to follow where the research leads; and 4) researchers document the design, its rationale, and the changes in both over time.

Design-based research is strong at connecting interventions to outcomes through mechanisms, and in complex, realistic settings like the classroom it can lead to better alignment between theory, treatments, and measurement than experimental research can.

Rather than measurement validity, Hoadley states that systemic validity is the real goal of design-based research. We want to create research that leads to inferences which "inform the questions that motivated the research in the first place." And because the classroom is not a lab, that research must consider the context of outside culture, participant history & interaction, and be willing to change tactics or follow developments where they lead.

As an example, Hoadley describes research done with a computer-based discussion tool, used within the context of a middle school science classroom. When they started out, the internet was new and just beginning to be widely used. Research began with a tool called Multimedia Forum Kiosk that was bound to a single machine. After a time they switched to a tool called SpeakEasy that performed the same functions online. Hoadley details how the tool was implemented, how the teacher used the tool in the classroom, and strategies that were developed to encourage or discourage patterns of use. Over time, use of the tool led to new questions, such as the motivation behind anonymous postings. By altering the tool's settings and polling participants they were able to make discoveries about that motivation that not only would have taken longer in another setting, but might not even have emerged as a phenomenon to be considered.

The design of this research included both technical and social elements, both of which Hoadley sees as crucial to the questions being examined. Instead of being an objective controller, the researcher must be involved in those contexts in order to observe interaction between design and enactment. Tentative generalizations aren't considered universal, but rather inspiration for further iterations that may lead to more universal understanding.

IT 695 - Week 6 - Prior Knowledge

Reading: Schwartz, D. L., Sears, D., & Chang, J. (2007). Reconsidering prior knowledge. In M.C. Lovett & P. Shah (Eds.) Thinking With Data (pp. 319-344). New York: Erlbaum.

One of Kirsch’s assumptions from last week’s article was this: history matters. According to Schwartz et al., history – or rather prior knowledge – is one of the most important aspects of learning. The authors state that “people learn by building on prior knowledge and abilities.” However, they suggest that most instruction focuses on Vygotsky’s zone of proximal development and aims to have students complete complicated tasks with help, so that they can later complete similar tasks independently. But if a student is poorly prepared for the lesson or has no informal knowledge, learning becomes much more difficult.

Teachers assume that their students come to them knowing certain things, though not always the things they are expected to know. The example in the article was of middle school students’ concept of random sampling; some of the students understood the concept, but when a sample was put into a specific context (a fun booth at a carnival), their ideas for sampling became decidedly un-random. Their intuitions were to sample people who might be interested in their booth, rather than a truly random sample. Intuition is different from mastery, and the authors suggest that a mastery approach to teaching focuses on developing “relevant prior knowledge.”
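The gap between the students’ intuition and a truly random sample can be made concrete in a few lines. This is my own toy sketch (the population, field names, and 30% figure are invented, not from the article): a random sample gives every student an equal chance of selection, while the intuitive approach of polling only people drawn to the booth guarantees a skewed picture.

```python
import random

random.seed(0)  # fixed seed so the draw is repeatable

# Hypothetical school population: exactly 30% are interested in the booth.
population = [{"id": i, "interested": i % 10 < 3} for i in range(200)]

# A truly random sample: every student has the same chance of selection.
random_sample = random.sample(population, 20)

# The intuitive "sample": ask only people already drawn to the booth.
biased_sample = [s for s in population if s["interested"]][:20]

def interested_fraction(sample):
    """Share of a sample that is interested in the booth."""
    return sum(s["interested"] for s in sample) / len(sample)

print(f"random sample: {interested_fraction(random_sample):.2f}")
print(f"biased sample: {interested_fraction(biased_sample):.2f}")
```

The biased fraction is 1.0 by construction, while the random sample lands near the true 30% - that gap is exactly the difference between intuition and mastery of sampling.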

I found the ideas about learning as reconciliation interesting. Plato’s learning paradox runs like this: people cannot learn something if they don’t already know it in some form, so is new knowledge even a possibility? The authors suggest that incommensurables are the key; people are uncomfortable when something is outside their understanding, and they want to correct that. Good instruction 1) points out the incommensurables, 2) provides motivation for reconciling them, and 3) finally provides the solution for comparison. Prior knowledge is vital because it is what defines the incommensurables.

Thursday, February 12, 2009

IT 695 - Week 5 - No Brain Is An Island

Distributed cognition. I sort of understand the concept, but I found the articles this week very difficult to plow through. I felt like Uma Thurman in some movie (I can't remember which one), when she said, "I'm reading 2 books. This one, and a dictionary so I can understand this one." But instead of a dictionary, I found myself googling again and again; I swear, even Keen would have been using Wikipedia to try to make sense of some of these articles! Basically, what I get is that cognitive processes are not completely internal. The process involves not just what's going through our heads, but also our environment, the social context and artifacts, our perception, our history.

In his article Distributed cognition: A methodological note, David Kirsch identified six assumptions that guided his research:

1. We act locally and are closely coupled to our local environments.
2. We externalize thought and intention to harness external sources of cognitive power.
3. Economic metrics have a place in evaluating distributed systems, but they must be complemented with studies of computational complexity, descriptive complexity and new metrics yet to be defined.
4. The best metrics apply at many levels of analysis.
5. Coordination is the glue of distributed cognition and it occurs at all levels of analysis.
6. History matters.


He goes on to explain these assumptions in depth. Closely coupled entities are defined such that a change in one leads to a change in the other, and so on back and forth. Actions may be pragmatic or epistemic, but often are some combination of the two. Kirsch's example described people playing Tetris. They must think about how the piece will fit into the available space, but often this thinking is aided by the epistemic action of rotating the piece repeatedly.
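The Tetris point can be sketched in a few lines of my own toy code (not from the article): rather than simulating each orientation purely in the head, the player rotates the piece in the world and simply reads off whether it fits.

```python
# Toy sketch of externalized trial-and-error in Tetris: pieces are sets
# of (x, y) cells, and we rotate "in the world" until the piece fits.

def rotate(piece):
    """Rotate a piece 90 degrees counterclockwise, shifted back into
    nonnegative coordinates."""
    max_y = max(y for _, y in piece)
    return {(max_y - y, x) for x, y in piece}

def fits(piece, gap):
    """Does the piece exactly cover the gap (another set of cells)?"""
    return piece == gap

def find_orientation(piece, gap):
    """Try each of the four rotations until one fits -- the external
    trial-and-error standing in for mental simulation."""
    for turn in range(4):
        if fits(piece, gap):
            return turn
        piece = rotate(piece)
    return None

# An L-shaped gap, and an L-piece starting in the "wrong" orientation.
gap = {(0, 0), (0, 1), (0, 2), (1, 0)}
piece = {(0, 0), (1, 0), (2, 0), (2, 1)}
print(find_orientation(piece, gap))  # three turns before it drops in
```

Each call to rotate is cheap compared to imagining the rotated shape, which is the trade-off Kirsch describes: offloading computation onto action in the environment.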

Economic metrics are explained using the example of 2 coffee houses performing the same services but employing 2 different methods - one writes the customer's order on the paper cup, the other uses a computerized ordering system. In this example, Kirsch describes how the routines of each coffee house work in terms of speed, accuracy, error type vs. frequency, error recovery rate, variance, learnability, and drink complexity. There are literally countless variables that affect whether a routine works or doesn't, and only by observing the actions and tying them to cognition and computational analyses can a system be measured or explained.

Coordination seems to be the key, since any distributed system is only as good as the combined function of its parts. Kirsch states, "In distributed systems the success of the whole depends equally on all these acts of local choice adding up, working together to move the system closer toward system goals." He strongly advocates the use of simulation and modeling to improve systems, with some cautions about relying too heavily on them. History is invaluable as well, because it is virtually impossible for the elements at work in a system to be without a history that has helped change and define them.

Wednesday, February 11, 2009

IT 695 Week 4 (Pt.3)

Study I Found:


Walraven, Amber, Brand-Gruwel, Saskia, & Boshuizen, Henny P. A. (2009). How students evaluate information and sources when searching the World Wide Web for information. Computers & Education. 52, 234-246.


Participants:


23 ninth grade students in the Netherlands; 8 male, 15 female

Method:


Students completed a questionnaire to determine their beliefs about and comfort with using the web.
The students were then asked to complete 4-6 tasks while thinking aloud; video cameras and screen capture software were employed. Questions were "fourth level" tasks created by their teachers, requiring students to perform higher level thinking. Usable key words were not given in the questions, and the number of responses was not specified.

The transcript for each task was provided to students in a second meeting, and they were asked to report why they had used or rejected certain sites. Transcripts were also coded and interpreted for constituent skills (define problem, search information, scan information, process information, and organize/present information). These were also broken down into subskills.

Findings:


Students spent 44% of their time searching, 31% scanning information, 16% processing, and 9% organizing information. Students switched frequently between constituent skills and followed a cycle of "search-scan-process-organize-search" in which the searching and scanning phases consisted of visiting multiple sites. Additionally, students did not spend time evaluating the sources for information, focusing instead on speed (slow-loading sites were quickly rejected) and content.

Interpretation:


I was not surprised by the findings that students spend most of their time searching for information and significantly less processing what they find. We use the internet to find answers to questions, and (I think someone in class said this) if we find the right answer, does it really matter where we found it? If the question is "How do I fix the password I screwed up in Ubuntu", then probably the source is unimportant as long as I get the password fixed. But for research questions and information problems, in many cases it really does matter. The question that I struggle with is how to teach my students quick and useful criteria for considering sources even before they scan the content. And, given the findings here, quick is one of the important factors!

IT 695 Week 4 (Pt.2)

Assigned Article:


Faulkner, Xristine, & Culwin, Fintan (2005). When fingers do the talking: a study of text messaging. Interacting with Computers. 17, 167-185.


Participants:


Survey: 565 Participants (440 London University Students, 125 cell phone shoppers), 256 female, 298 male, 11 undetermined
Diary Study: 24 Participants (23 students, 1 tutor - participation was part of course work) 3 female, 21 male

Method:


Questionnaires were distributed on campus and at a local cell phone shop. Participants were asked to report the number of text messages they sent/received and the purpose of those messages. They were also asked to report their preferred method of communication for various situations in order to rank their likelihood of texting. Diaries were completed by students in a computer studies course; details for all text messages (incoming and outgoing) were recorded for 2 weeks. Messages recorded in diaries were classified according to categories and entered into a web database.

Findings:


Survey: Texting is used more by younger participants and females, and their data suggests activity declines with age. Women were more likely to use texting for social purposes, men for business. Some situations were deemed appropriate for texting (questions, rendezvous, jokes), some were not (asking someone on a first date, offering condolences), though younger participants were less strict about this.
Diaries: Textish (words with vowels removed) was used less than the researchers expected (196 of 337 messages contained no textish). Of 15 identified categories, those with most messaging were questions, personal information, and signoff. Advertisements and commercial information were the least reported, but were the longest messages. The most popular days for messaging were Saturday, Sunday, and Friday, but not necessarily for planning social activities.

Interpretation:


I agree that this study was just to baseline text usage, not to determine educational use. I do think that since the study in 2004, the use of texting has changed dramatically. The study found the average number of messages daily was 5-6, but my teenage son sends/receives that many messages in 5 minutes. One of the purposes of the study was to determine texting's place in the hierarchy of methods of communication, and I believe that in the last few years it has far outpaced the growth of other mediums. I would be extremely interested to see the results of a similar study done today.

Thursday, February 5, 2009

IT 695 - Week 4

Ok, with no real reading assigned this week (other than examining a research study - more on that later), I found myself in a serious quandary about what to write. In my other IT class we read more Shirky, and again I was struck by how much his ideas made sense to me. So, I did a little googling and found a blog from Shirky himself on one of Keen's favorites: Citizendium.

In this blog entry, which was archived on http://www.corante.com/, Shirky defines 3 beliefs that drive the creation of Citizendium: 1) There are experts and we can be made to recognize who they are, 2) Something created by giving experts special treatment will be better than something created without doing so, and 3) Once we know who the experts are, we will of course defer to them and rarely if ever disagree. Shirky says all 3 of these beliefs are wrong. Again, the key to this argument is bound up in the definition of "expert", the social fact. How do we know who is an expert? Usually they are decreed so by some institution or other. My favorite example:

"We have a sense of what it means that someone is a doctor, a judge, an architect, or a priest, but these facts are only facts because we agree they are. If I say “I sentence you to 45 days in jail”, nothing happens. If a judge says “I sentence you to 45 days in jail”, in a court of law, dozens of people will make it their business to act on that imperative, from the bailiff to the warden to the prison guards. My words are the same as the judge's, but the judge occupies a position of authority that gives his words an effect mine lack, an authority that only exists because enough people agree that it does."


In my other class, one of the students was hugely opposed to this idea. She was completely unwilling to even entertain the thought that people’s perceptions help shape our reality. The idea that one person could start spouting an idea contrary to widely held beliefs (the world is round, anyone?) and eventually gain a following that would change our concept of the truth, was completely unacceptable to her. The thing for me is, most people these days are not going to believe an arbitrary statement that shakes up their reality – not without sufficient, repeated, and reliable evidence - even if it does come from an expert.

A subsequent blog entry had Shirky posting Sanger’s response to his arguments. Sanger strongly defends his editorial credentialing process and insists that anyone is able to post and declare themselves an expert if they provide a verifiable CV on their page. He states his belief that the question of expertise will not be the overwhelming focus of the project, as Shirky believes it will become.