Luker, Data Reduction & CAQDAS

I apologize if my blogs about blogs are becoming redundant, but I was quite taken by an active blog produced by Dan Hirschman, a PhD candidate at the University of Michigan. In his About This Blog section, he describes his academic and research interests as follows: "I am a PhD candidate at the University of Michigan in Sociology and the certificate program in Science, Technology and Society (STS). Broadly, I am interested in economic sociology, the sociology of economics, organizations, and science studies. Specifically, I am interested in the interaction of quantification, law, organizations and knowledge-production." His research interests are multidisciplinary, and I couldn’t help but wonder how his inquiries would be framed in, and influenced by, the context of an information science degree here at the iSchool.

In any case, I came to know this blog by searching for reviews of Luker. While I agree with much of what she says and find her analogies quite plausible, I was looking to come to an understanding of what graduate students at other institutions and from other disciplines felt about her work. In a post devoted to his reaction to Salsa Dancing into the Social Sciences, a methods book that he claims not only to have read but to have devoured, he raises key insights into her strengths. He astutely expounds on Luker’s central notion of finding a balance between the logic of data and the discovery of a good research question. He also praises the fact that she presents sampling, operationalization, and generalization as not quite sufficient for research that aims to generate theories. This point is actually something I grappled with while conducting my peer review assignment, as the research questions posed in the paper I chose to review had mostly theoretical implications.

To draw slightly further on Hirschman’s review, he claims not to be completely in agreement with Luker’s endorsement of CAQDAS (Computer-Assisted Qualitative Data Analysis Software; see p. 200) and Charles Ragin’s method of Boolean analysis (Qualitative Comparative Analysis, or QCA), a method that works “out an algorithm that most economically describes the patterns observed in the data” (Luker, 2008, p. 209). Luker argues that Ragin’s method “permits us to see both the messiness and the contingency in social life, while at the same time recognizing the patterns” (2008, p. 213).

I understand that the algorithm recognizes patterns and is not intended to measure the messiness of life, but the skeptic within me does not trust its ability to make meaning out of nuance.
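For readers unfamiliar with Ragin’s approach, here is a minimal sketch of the Boolean-minimization idea behind crisp-set QCA. This is an illustrative toy only, not Ragin’s software or Luker’s example: the condition names and the configurations in the little table below are invented for demonstration.

```python
# A toy illustration of the Boolean-minimization idea behind crisp-set QCA.
# Each configuration records the presence (1) or absence (0) of a condition
# among cases that share the outcome. Two configurations differing on exactly
# one condition can be merged, with that condition replaced by "-" ("doesn't
# matter"), which is how the algorithm "economically describes" the patterns.
from itertools import combinations

CONDITIONS = ["high_union_density", "left_government", "small_economy"]  # hypothetical

# Hypothetical configurations observed among cases where the outcome is present.
configs = [
    (1, 1, 1),
    (1, 1, 0),
    (0, 1, 1),
]

def merge(a, b):
    """Merge two configurations if they differ in exactly one condition."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) == 1:
        merged = list(a)
        merged[diffs[0]] = "-"
        return tuple(merged)
    return None

def reduce_once(configs):
    """One pass of pairwise reduction over the configurations."""
    merged, used = set(), set()
    for a, b in combinations(configs, 2):
        m = merge(a, b)
        if m is not None:
            merged.add(m)
            used.update([a, b])
    # Keep the merged terms plus anything that could not be combined.
    return merged | {c for c in configs if c not in used}

for term in sorted(reduce_once(configs), key=str):
    print(dict(zip(CONDITIONS, term)))
# (1, 1, '-') would read: high union density AND left government are associated
# with the outcome, regardless of economy size.
```

In a real QCA this reduction is repeated until no further merges are possible and the result is checked against the cases; the point here is simply that the algorithm compresses a messy table of cases into a few patterned statements, which is exactly the part the skeptic in me hesitates over.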

In any case, Luker’s chapter on Data Reduction homes in on the intricacies of INF1240. Learning about research methods is somewhat linear; however, learning how to effectively use the methods and conduct data reduction in a way that allows us to “reduce our data to something we can manage, and analyze our data in meaningful ways” (Luker, 2008, pp. 198-199) is where practice and theory meet.

As a final thought, Hirschman cites one of his favourite examples from Luker, and I would like to share it with you, as it may well put a smile on your face:

“Librarians, along with pediatricians, are among the greatest human beings in the universe.” (Luker, 2008, p. 85)

– Vanessa K

Baseball and Peer Review

I recall that last week Professor Galey was talking about how baseball (or was it softball?) would be a useful metaphor for peer review, or at least I think that’s what he said – my memory is hazy. Either way, this article provides a helpful way to think about peer review, and it should be useful as people begin proofreading and submitting assignments!

Meyer, Alan D. 1996. “Balls, Strikes, and Collisions on the Base Path: Ruminations of a Veteran Reviewer.” In Frost, Peter and M. Susan Taylor, eds., Rhythms of Academic Life: Personal Accounts of Careers in Academia. SAGE Publications.

Edit: I originally forgot to add my name!

Happy peer reviewing,

Katherine Laite

19th Century painting at the Salon in Paris and the rise of Impressionism: a peer review case study.

A) Art is my background and my benchmark.
B) I also really only understand things when I can relate something new to something I know.
If you are following my deductive reasoning up till now you could rightly infer C:
C) that I learn best when I relate things I learn to my background in art.

Now that I have presented my researcher/how-I-understand biases, I would like to present my two-part blog post on how I am coming to terms with peer review. I should also say that the Sokal affair, and the ‘scandalousness’ of that event with regard to peer review, has informed my two case study choices.
In this first post I want to discuss the salon culture found in Paris throughout the 19th century and how artists dealt with state-sponsored jury selection (read: peer review).

In 1791, the newly formed republic abolished the Academy, opened the Louvre to all artists, and decreed that all artists had an inalienable right to show their art, provided the work was not offensive to public standards (Hauptman 1985, 96).
From that point on, the salons had a jury select the work to be exhibited*. The nature and tastes of the subsequent juries, however, seemed to change like the weather. “Jean Gigoux, for example, who was decorated in the Salon of 1835, had his painting of Antoine et Cleopatre rejected in the Salon of 1837, only to have the same painting accepted in 1838, although there was no change in the composition of the jury” (Hauptman, 1985). This led critics to denounce the jury as a whole (100), and petitions were even sent to the King in 1843 (105). As a result of the growing discontent with the salons, private galleries began to present competing shows of the artists who had been rejected by the Salon (99). And in 1863 the Emperor gave the rejected works an official exhibition alongside the official Salon (Wilson-Bareau 2007, 309), where Manet’s infamous Luncheon on the Grass was shown after being rejected by the Salon jury that year**.
Manet’s work soon became a rallying point for up-and-coming artists who would later become known as the Impressionists and who would develop the leading style in French painting at the end of the 19th century. They even put on an annual salon held on the same day as direct competition to the original Salon (Delacour and Leca 2011). While the state-sponsored Salon continued to influence public approval of the artists, a quote from Renoir shows how those subjected to the scrutiny of the jury truly felt: “My submission to the Salon is just commercial” (Delacour and Leca 2011).
Using Luker’s (2008) idea that peer review is the “gold standard” (69) for scholarly research, I would like to claim that a similar idea presided over the Salon. William Hauptman (1985) writes: “the nineteenth-century Salon was the only viable avenue for public exhibition […] the acceptance of a work of art by an established and reputable jury signified a tacit measure of quality. Refusal by the jury, on the other hand, was often equated with critical denunciation, the results of which might severely limit the artist’s means of earning his livelihood” (95-96).
With its ability to act as a filter for public approval and consumption of art, I would make the case that the Salon jury held legitimizing power similar, if not parallel, to that of the peer review system in the academic journal world Luker describes. Both systems rely on their ability to maintain approval of their filtering capabilities.
Manet’s Luncheon on the Grass is a great example of filtering restricting the approval of innovative thinking. Years of seemingly unpredictable jury criteria led to artist and public unrest, evolved into private exhibitions of the refused art, and culminated in petitions to the Emperor that led to the Salon des Refusés of 1863. On the other hand, the Sokal affair is just as exemplary of too few filters in the review process. His hoax was able to pass the minimal screening provided by the editors because they were optimistic that the article would benefit the readership of the publication, regardless of the credibility of his claims. Social Text, as an agent of legitimization for the theories it published, at the very least feared losing its perceived authority, and it now has a peer review process.
I know some may argue that an art jury and a scholarly journal are two very different things. That said, in light of Renoir’s comment above, I ask you to consider this: when the review process fails to contribute to your peers’ work or to their understanding of that work, is it really peer review, or has it changed into something else?

-Richard
(next post – Duchamp and R. Mutt’s Fountain rejection from The Big Show)

* Except for the years 1799 and 1848.

** Arguably the beginning of Western European fine art’s transition into its ‘early modern’ period and the age of Impressionism. (https://www.youtube.com/watch?v=7DnQRsS276Q, from the BBC series “The Private Life of a Masterpiece”; *WARNING* art-based nudity)

Wilson-Bareau, J. 2007. “The Salon des Refusés of 1863: A New View.” The Burlington Magazine 149 (1250): 309-319.

Luker, K. 2008. Salsa Dancing into the Social Sciences: Research in an Age of Info-glut. Cambridge, MA: Harvard University Press.

Hauptman, W. 1985. “Juries, Protests and Counter-Exhibitions before 1850.” The Art Bulletin 67 (1): 95-109.

Delacour, H. and B. Leca. 2011. “The Decline and Fall of the Paris Salon: A Study of the Deinstitutionalization Process of a Field Configuring Event in the Cultural Activities.” Accessed March 12, 2013. http://search.proquest.com.myaccess.library.utoronto.ca/docview/926417472

Anthropological Musings

Hello again, ladies and gents. This is my first topic starter, and it’s related to our class discussion today about Luker, anthropology, and participant observation. One of the most interesting subjects Luker touches on is the idea of trust in anthropological methods. This reminded me of a study conducted by the famed anthropologist Margaret Mead. Her classic anthropological study Coming of Age in Samoa involved both participant observation and interviews, but also the building of relationships and trust. Mead lived in a Samoan village of about 600 people for her study and got to know people very well. She concluded that Samoan girls transitioned into adulthood with relatively little turmoil (compared to American girls) because they had strong role models and belonged to a community where they were well educated about sexuality, bodily functions, and death. She claimed that Samoan girls were free to explore their sexuality until they settled down and married later on. This quite shocked the American public when her study was first published. Mead’s work was later criticized by Derek Freeman, who returned to the Samoan village where she had lived and concluded that her research was based on falsehoods. He even wrote a book attacking her findings, Margaret Mead and Samoa: The Making and Unmaking of an Anthropological Myth.

However, Freeman’s work was also criticized, because his evidence was based on interviewing the same women Mead had spoken to years earlier. These women were now mothers and grandmothers, who had converted to Christianity in the meantime. Freeman’s critics argued that this conversion and the adoption of American cultural practices had changed Samoan life to the point that it was relatively incomparable to the culture Mead had lived in previously. Others pointed out that Freeman’s gender (male) could have made the female participants hesitant to tell him truths about sexuality that they had been willing to reveal to Margaret Mead, because she was female but also because she had spent time earning their trust, and was of a similar age to them.

I think this tale illustrates the issue of trust in participant observation. It seems to me that Margaret Mead’s approach was to immerse herself in the culture and even make friends with the Samoan girls who were the “subjects” of her study. Freeman’s work was later criticized more thoroughly, with some scholars suggesting he had some kind of personal vendetta against Mead.

Can you think of other examples, from anthropology or other fields, where these issues of trust, gender, and participation might have affected the outcomes of studies? I think this is a really interesting topic to navigate, and I hope you’ll join me in discussing it.

-Katherine Laite

Random observations …

This blog entry consists of a series of points based loosely on the readings.

(1) Two detectives arrive at a crime scene. Detective One says, ‘Oh look! It’s a burglary. Didn’t I see a suspicious fellow down the street? Hmmm. What evidence links this crime to that fellow?’ Detective Two, on the other hand, looks at the base of a broken window, eventually bringing forth a magnifying glass. He asks himself, ‘Did a crime occur here? Can I find some fingerprints? Which signifiers suggest motive?’

I think this scenario evokes the significance of sampling in the humanities. In the social sciences, the concept of sampling (Knight 119–26) is of course linked to polling (large scale) or focus groups (small scale). In the humanities, sampling emerges through shared notions of authorship, theme, genre, and period. By bringing into play a wide range of historical and cultural evidence, a critic could argue just about any case, provided that his or her answer agrees with public tastes.

(2) Claimsmaking (Knight 16–48) reifies an argument (a ‘claim’) into some kind of object (i.e., something which can be made). As a term, it’s rather deceptive, certainly less effective than criticalmaking.

(3) I’d like to address Knight’s point that “[some strategies of writing allow scholars] to amass sprawling and self-indulgent descriptions that are free of meaning or claims” (194). In many cases, these kinds of passages merely restate critical claims.

Josh

Peer Review? Or Fear Review?

“The … problem is that peer review as we currently practice it isn’t simply a mechanism for bringing relevant, useful work into circulation; it’s also the hook upon which all of our employment practices hang, as we in the US academy have utterly conflated peer review and credentialing” (Fitzpatrick, 2010).

This quote from Fitzpatrick’s blog entry pretty much sums up my feelings about the peer review process in a nutshell.

While I have never had my work peer reviewed or published, I often wonder whether the peer review process has gone the way of other rites of passage that are now more hindrance than help.  The first example that always springs to mind for me is the unpaid internship.

In its purest form, I think it’s a good idea.  The intern receives experience in a relevant field, learns new skills, makes connections and gets to experience “real-world” work.  However, the idea’s popularity has become its downfall.  Organizations create unpaid internships and students trip over themselves to get a foot in the door.  The number of internships goes up while the quality goes down and now it’s just considered a rite of passage to spend summers doing thankless free labour in order to be considered suitable for a paying job (one day, maybe).

The peer review and publication process strikes a similar note with me (and I welcome criticisms of this point from people who have actually undergone the peer review and publishing process).  But from friends doing PhDs and friends who already have them, I get the sense that the need to publish in quantity, or to publish for a higher power (such as a corporation funding your work), has superseded the need to publish material that is true to the heart of one’s research.  In my Foundations of Library and Information Science class last semester, we were shown an example of a study that was rushed ahead in the publication process.  As a result, a reviewer completely missed a major flaw in the study and was subsequently lambasted for it.  I wonder how often this scenario occurs: what are the odds of an academic twisting their material for a better chance at publication?  Or submitting unfinished research in the desperate hope that they can get their foot in the door?

In addition to this, academia is often labelled an “old boys’ club,” so younger academics need to load up on publications in order to be considered on the same level as established figures.  They’re the ones doing the research of tomorrow, and they’re the ones who risk undercutting their own research in order to get the numbers they need.  As a result, it seems that academia suffers as a whole and nobody wins.

I really like Mike O’Malley’s idea of crowdsourcing peer review via online forums (O’Malley, 2010).  An “elbow-less” space might be more productive and help to enhance the spirit of collegiality in academia.  The problem that arises with this, however, is the same problem that continues to ensure that unpaid internships will be a “thing” for the next few decades.  Unless all academics and all unpaid interns agree as a body to stop partaking in a system that is less than ideal, the system will continue to thrive.  Unfortunately, our society is such that when called upon for a unanimous act that would assist everyone equally, there are always people who choose instead to climb over the backs of their peers.

If I had more time to write an entire paper on this subject, I’d actually love to go into detail about the perils of crowdsourcing peer review (trolling, anyone?), the regional differences in academic publishing culture, and whether or not we should approach “fixing the system” from a different angle.

Yours in research,

Laura

Fitzpatrick, K. (2010, October 25). Peer-to-Peer Review and Its Aporias [Weblog post]. Retrieved from http://www.plannedobsolescence.net/blog/peer-to-peer-review-and-its-aporias

O’Malley, M. (2010, October 19). [Web log message]. Retrieved from http://theaporetic.com/?p=446

Methodological Pun-dits

I wanted to post this as a comment on Vanessa’s great “Thinking about Peer Review” post, but it wouldn’t let me comment with a photo. Probably for the better; her work is actually making a point. -Richard