Tuesday, 11 July 2017

Experience sampling within psychology


Experience sampling allows for “real time,” in situ assessment of behaviour, temporally close to the moment of enactment. Early attempts required participants to carry dedicated devices, which were expensive and bulky, but the rise of smartphones means the method can now be deployed across a variety of research designs. For example, text messages can easily be sent from a dedicated account or via a third-party automated system (e.g. here).
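To give a flavour of how little infrastructure this now needs, below is a minimal sketch of signal-contingent sampling over SMS in Python. It uses the Twilio client purely as an example gateway (any SMS provider would do), and the credentials, phone numbers, and question text are all placeholders.

# Send experience-sampling prompts at random times within a daily window.
# Twilio is used only as an example SMS gateway; credentials and numbers
# below are placeholders.
import random
import time
from datetime import datetime, timedelta

from twilio.rest import Client

client = Client("ACxxxxxxxx", "auth_token_placeholder")

def schedule_daily_prompts(to_number, n_prompts=5, start_hour=9, end_hour=21):
    """Signal-contingent sampling: n_prompts texts at random times today."""
    window_seconds = (end_hour - start_hour) * 3600
    offsets = sorted(random.sample(range(window_seconds), n_prompts))
    day_start = datetime.now().replace(hour=start_hour, minute=0, second=0)
    for offset in offsets:
        send_at = day_start + timedelta(seconds=offset)
        # Naive scheduler: block until the next prompt is due, then send.
        time.sleep(max(0, (send_at - datetime.now()).total_seconds()))
        client.messages.create(
            body="On a scale of 1-7, how anxious do you feel right now?",
            from_="+440000000000",
            to=to_number,
        )

schedule_daily_prompts("+447700900000")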

More complex designs can also combine real-world data from smartphone sensors. This might include location via GPS, or health-related data in the form of movement or heart rate. Experience sampling can also help reduce the temptation to provide socially desirable responses. Most smartphones come equipped with a camera, and mobile apps allow participants to upload photos as supplemental data.
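As a rough illustration of how these streams can be combined, the snippet below pairs each self-report response with the nearest heart-rate reading in time using pandas. The file and column names, and the five-minute tolerance, are assumptions made for the sake of the example.

# Align self-report responses with the nearest sensor reading in time.
# Assumes two CSV files sharing a 'timestamp' column; names are hypothetical.
import pandas as pd

responses = pd.read_csv("responses.csv", parse_dates=["timestamp"])
heart_rate = pd.read_csv("heart_rate.csv", parse_dates=["timestamp"])

# merge_asof requires both frames to be sorted by the key.
responses = responses.sort_values("timestamp")
heart_rate = heart_rate.sort_values("timestamp")

combined = pd.merge_asof(
    responses,
    heart_rate,
    on="timestamp",
    tolerance=pd.Timedelta("5min"),  # ignore readings too far from a prompt
    direction="nearest",
)
print(combined.head())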


The changing face of experience sampling - from software running on expensive Personal Digital Assistants to smartphone applications that can be deployed to millions of devices.


However, this method doesn't feature as widely within psychology as you might expect, which adds to my frustration at how slow social psychology is to adopt new methodologies.
That said, it is unfair to level this criticism at the discipline in isolation. I also get the feeling that those who develop or propose new methods struggle to explain them, or to convince others to adopt them. Many advances initially appear with great fanfare, but it often takes several years until they are demystified and adequately explained to those who are most likely to use them.

On a more positive note, there are moves to change this situation, particularly when it comes to experience sampling, with a number of groups developing open-source frameworks (e.g. http://www.experiencesampler.com/). On paper at least, these appear just as impressive as any expensive commercial solution.


Self-reported anxiety levels measured twice over a 10-day period (N=51). This dataset reveals how even a simple measure of mood can vary considerably over several days.

Of course, experience sampling with smartphones has some limitations. Only participants with smartphones can take part (though most people today own such a device), and participants can still be strategic about what they report. Yet experience sampling with smartphones will almost always improve the quality and validity of measurement, particularly when real-world behavioural markers (or digital traces) are combined with self-report.

Further Reading

Ellis, D. A. and Piwek, L. (2016). The future of... wearable technology. CREST Security Review, 1, 4-5. 

Piwek, L., Ellis, D. A. and Andrews, S. (2016). Can programming frameworks bring smartphones into the mainstream of psychological science? Frontiers in Psychology, 7, 1252.

Thai, S. and Page-Gould, E. (2017). ExperienceSampler: An Open-Source Scaffold for Building Smartphone Apps for Experience Sampling. Psychological Methods. Advance online publication. http://dx.doi.org/10.1037/met0000151 

Saturday, 25 March 2017

Apple Or Android? What Your Choice Of Operating System Says About You

This post previously appeared on CREST's Blog in December 2016.



Your mobile phone provides all kinds of useful data about what you do, and where. But does even the choice of handset say something about you? Heather Shaw and CREST Associate David Ellis tell us more.



Digital traces can provide scientists and law-enforcement agencies with a range of information about groups and individuals. In particular, smartphones can log communications (e.g., calls and messages) and behaviour from multiple internal sensors (e.g., movement and location). For example, patterns of smartphone usage can be indicative of a person’s sleep-wake activity. Location information, on the other hand, is already used to provide alibis, and can accurately predict where a person works and lives. In a very short space of time, the smartphone has become a mini version of ourselves. This is probably why people become increasingly anxious if someone else attempts to use their smartphone!
However, our recent research suggests that people are already giving away information about themselves before they even switch their smartphone on. The choice of operating system, whether to own an iPhone or Android device, can provide some clues about the individual behind the screen. In one study, we asked 529 smartphone users to provide information about themselves including their personality. We found that Android users were more likely to be male, older, and rated themselves as more honest and open. Android users were also less concerned about their smartphone being viewed as a status object than iPhone users. These results suggest that even at the point of purchase, a smartphone can provide reliable clues about its owner.
Following these results, we were able to build a statistical model of smartphone ownership. In a second study, our algorithm could accurately predict what type of smartphone an individual owned based on around 10 simple questions relating to their age, gender and personality.
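For readers curious about the mechanics, the sketch below shows the general shape of such an approach in Python: a cross-validated logistic regression predicting operating system from survey responses. The file and column names are hypothetical, and the published model may differ in both features and estimator.

# Predict smartphone operating system (iPhone vs. Android) from survey
# responses. File and column names are hypothetical placeholders; the
# published model may differ.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv("survey.csv")
predictors = ["age", "gender", "honesty", "openness",
              "phone_as_status_object"]  # ~10 items in the actual study
X = pd.get_dummies(data[predictors], drop_first=True)  # encode categoricals
y = data["operating_system"]  # 'iPhone' or 'Android'

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
print(f"Mean classification accuracy: {scores.mean():.2f}")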
A similar method might one day predict what type of technology a person is likely to adopt. While this may be partly driven by an individual’s knowledge of what technology is available, it may also be governed by other individual factors relating to risk aversion. This could extend not only to established technologies which are secure, but also to those which are illegal and likely to be associated with criminal activity.
The rapid rise of smart and wearable technology requires further theoretical and applied research. Understanding the mechanisms behind technology adoption can improve existing resources that mitigate security concerns. This research can be used to foresee whether a person is likely to adopt an illegal technology, or whether a person will adopt a technology which is safer and more secure.

Heather Shaw is a Psychology Technician and PhD student at the University of Lincoln, UK. She tweets at @H_Shawberry. David A. Ellis is a lecturer in computational social science and a CREST Associate, based at Lancaster University, UK. He tweets at @davidaellis. You can view the data for this study here.

Monday, 9 January 2017

The most depressing day of the year does not exist

It's that time of year again. In early January, several reports will appear in the press (e.g. The Sun) suggesting that the third Monday of January has been identified as the most depressing day of the year.

This is false.

Previous Weekday Research

It is true that some days of the week evoke strong emotional responses. In fact, a small body of research has identified regularities between weekday and behaviour, and between weekday and mood. Across studies on these topics, two main patterns are emphasised. One is the so-called Blue Monday effect. In a wide range of situations and measures, outcomes are especially negative on Mondays. Many of these situations are non-trivial, as they pertain to health and economic matters. For example, heart attack risk is higher, suicide rate is higher, reported mood is lower, and stock returns are lower. Emails sent on a Monday even contain more grammatical mistakes and are less positive.

Especially positive outcomes on Fridays have also been reported, but with less consistency. This pattern suggests that, at least in terms of mood, Mondays (and possibly Fridays) may be qualitatively different from the other days of the week, which are themselves relatively undifferentiated.




A second pattern emphasises gradual change from negative to positive through the week. For example, we observed that medical appointments on Mondays were much more likely to be missed than appointments on Fridays. Critically, the rate of missed appointments declined monotonically over the intervening days. This pattern suggests that, rather than being qualitatively different, Monday and Friday may be two extremes along a continuum of change.
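For anyone curious to look for similar patterns, the tabulation boils down to a few lines. Below is a minimal sketch in Python (pandas); the file and column names are hypothetical, not our actual appointments dataset.

# Tabulate the proportion of missed appointments by weekday.
# File and column names are hypothetical; this only illustrates
# the shape of the analysis.
import pandas as pd

appointments = pd.read_csv("appointments.csv", parse_dates=["date"])
appointments["weekday"] = appointments["date"].dt.day_name()

order = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
missed_rate = (appointments.groupby("weekday")["missed"]  # missed coded 0/1
               .mean()
               .reindex(order))
print(missed_rate)  # a monotonic decline would support the second pattern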

Poor Journalism

Unfortunately, this research is constantly at loggerheads with the idea that there is a universal worst day of the year. This can in fact be traced back to a press release from a travel company in 2005. There is, in stark contrast to weekday research, very little evidence to suggest that seasonal changes in mood exist - a key exception being those who suffer from Seasonal Affective Disorder (SAD).

Regarding weekday effects specifically, the big challenge for future research is to understand and unpick what might be driving these effects.

In an ideal world, I would rather that these issues were not continually overshadowed by the lazy reporting of a phenomenon that has long since been publicly debunked.

Thursday, 24 November 2016

Media coverage from recent paper: Predicting Smartphone Operating System from Personality and Individual Differences

Shaw, H., Ellis, D. A., Kendrick, L.-R., Ziegler, F. and Wiseman, R. (2016). Predicting smartphone operating system from personality and individual differences. Cyberpsychology, Behavior, and Social Networking, 19(12), 727-732. doi: 10.1089/cyber.2016.0324.
I am no longer going to try to compile complete lists of media coverage, but this work has appeared in The Daily Mail (UK), The Mirror (UK), The Metro (UK), Slate (France), ESS (Finland), MK (Korea), Aftenposten (Norway), The New York Post (USA), The Inquirer (USA), CNET (USA), and Vogue (USA).

Monday, 3 October 2016

Wearable manufacturers are still not letting customers view their raw data

I've worn a Garmin wearable fitness tracker religiously for the last 9 months. The device has now become unreliable, but with so much data collected, I was curious to quantify patterns based on physical activity and sleep (this model measures both).

However, while I can view a daily step count using the online service Garmin Connect, I also wanted to download my total step count for each day and run my own separate analysis.

But this isn't possible (see below).





I own the device, but not the data. Garmin can provide access to an API, but this remains expensive according to this Reddit thread. It would actually be cheaper to build my own device and use that instead.

Personally, I don't see how preventing customers from accessing their own raw data can continue. I understand why a manufacturer would restrict access to the exact algorithm that converts accelerometer data into steps, but my request outlined above is entirely reasonable. Manufacturers could even run competitions where people are encouraged to develop new predictive analytics/insights from this data, which could be integrated into new products and services.
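For illustration, the analysis I had in mind is trivial once raw counts are available. A minimal sketch in Python, assuming a hypothetical CSV export with one row per day (Garmin offers no such export at the time of writing):

# Summarise daily step counts from a (hypothetical) raw export.
# The file format and column names are assumptions.
import pandas as pd

steps = pd.read_csv("daily_steps.csv", parse_dates=["date"])

print(steps["steps"].describe())  # mean, spread, extremes
weekly = steps.set_index("date")["steps"].resample("W").sum()
print(weekly)  # weekly totals across the 9 months of wear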

The attitude of consumer wearable manufacturers here is totally at odds with other areas of the tech industry. For example, both Facebook and Twitter provide access to user-generated data on request. However, as colleagues and I discussed earlier this year, manufacturers remain uninterested in or unwilling to allow users access to their own data from wearable devices.

Wednesday, 3 August 2016

Rosenberg self-esteem scale: SPSS Script

The Rosenberg self-esteem scale is a psychological inventory consisting of 10 questions, each answered on a 4-point Likert scale. It is used extensively to measure self-esteem across the social sciences.

Below is a short script for SPSS which will help speed up the coding process. 

All items should be entered as separate numeric variables labelled Q1, Q2 ... Q10.


The script reverse-scores the relevant items, calculates a total score, and then estimates the scale's internal consistency (Cronbach's alpha).


*Part 1 - reverse scoring of specific items.
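* On a 1-4 response scale, subtracting from 5 flips each score (1 becomes 4, 2 becomes 3).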


COMPUTE R3 = 5 - Q3.
COMPUTE R5 = 5 - Q5.
COMPUTE R8 = 5 - Q8.
COMPUTE R9 = 5 - Q9.
COMPUTE R10 = 5 - Q10.
EXECUTE.


*Part 2 - total score.

COMPUTE Rosenberg = Q1+Q2+R3+Q4+R5+Q6+Q7+R8+R9+R10.
EXECUTE.


*Part 3 - reliability (Cronbach's alpha).

RELIABILITY
  /VARIABLES=Q1 Q2 R3 Q4 R5 Q6 Q7 R8 R9 R10
  /SCALE('ALL VARIABLES') ALL
  /MODEL=ALPHA.


Friday, 22 July 2016

Open science reading list

Science has its problems, but many early career researchers (myself included) often struggle to know how to improve systems that we still very much have to operate within on a daily basis.

That said, I am a firm believer that making research readily available to others is something that we should all work towards where possible. This applies to publications, data, computer code/software and the peer review process.


The references below are taken from my own reading, but this list certainly isn't exhaustive.

All of these papers pull in the same direction. Specifically, they provide convincing evidence that open access research practices help science as well as the individual researcher.

Early career researchers, who typically have very little time to get ideas off the ground and demonstrate their societal importance, will help their own cause by ensuring that their work is readily available across multiple disciplines and beyond.

Moving forward, the next major issue for open access is no longer whether it should be at the centre of the mainstream scholarly communication system, but how it will work effectively.

Antelman, K. (2004). Do open-access articles have a greater research impact? College & Research Libraries, 65(5), 372-382.

Davis, P. M. (2011). Open access, readership, citations: a randomized controlled trial of scientific journal publishing. The FASEB Journal, 25(7), 2129-2134.

Donovan, J. M., Watson, C. A., & Osborne, C. (2014). The open access advantage for American law reviews. Edison: Law + Technology (JPTOS's Open Access Journal), Forthcoming.

Harnad, S., & Brody, T. (2004). Comparing the impact of open access (OA) vs. non-OA articles in the same journals. D-Lib Magazine, 10(6).

Kousha, K., & Abdoli, M. (2010). The citation impact of Open Access agricultural research: A comparison between OA and non-OA publications. Online Information Review, 34(5), 772-785.

Lawrence, P. A. (2008). Lost in publication: how measurement harms science. Ethics in Science and Environmental Politics, 8(1), 9-11.

PLoS Medicine Editors. (2006). The impact factor game. PLoS Medicine, 3(6), e291.

Piwowar, H. A., & Vision, T. J. (2013). Data reuse and the open data citation advantage. PeerJ, 1, e175.

Sandve, G. K., Nekrutenko, A., Taylor, J., & Hovig, E. (2013). Ten simple rules for reproducible computational research. PLoS Computational Biology, 9(10), e1003285.

Siebert, S., Machesky, L. M., & Insall, R. H. (2015). Overflow in science and its implications for trust. eLife, 4, e10825.

Walsh, E., Rooney, M., Appleby, L., & Wilkinson, G. (2000). Open peer review: a randomised controlled trial. The British Journal of Psychiatry, 176(1), 47-51.

Wang, X., Liu, C., Mao, W., & Fang, Z. (2015). The open access advantage considering citation, article usage and social media attention. Scientometrics, 103(2), 555-564.

Wicherts, J. M. (2016). Peer review quality and transparency of the peer-review process in open access and subscription journals. PLoS ONE, 11(1), e0147913.