Welcome back. This is our fourth and final lesson, on new and emerging methods and new sources of data. Our outline is as follows. We'll first look at mobile web, which we've talked about briefly in our discussion of web surveys more generally. This is basically a conventional online survey completed on a mobile device such as a smartphone. Then we'll talk about interviews conducted via text messages, or SMS. In some ways these are quite similar to mobile web, but the technology is different, it's SMS, and the interaction has a different flavor. And we'll touch briefly on multimode surveys on smartphones, where the issue of mode choice becomes quite important. That is, which of the modes available to a respondent did they actually decide to use for completing the questionnaire? And can they switch modes midstream? That, for example, is one of the questions that comes up when thinking about multimode surveys on smartphones.

Then we'll talk about new data sources that can potentially augment or even replace survey data, by which I mean the self-reports we've been talking about throughout the whole course. There are auxiliary data collected on smartphones, sometimes called passive data collection, such as GPS data. Global positioning information can be quite useful in understanding a respondent's movements, for example. And there are other sources of data, health data, for example, that are generated more or less automatically. Administrative data are records that are often maintained by governments or businesses about citizens and customers, and these sometimes contain data that are quite close to what a survey question is designed to elicit. So if administrative data can be used in place of a survey question, that reduces burden on respondents and saves money by not having to ask as many questions. Whether the data are exactly the same is another question; it's probably unlikely.
And in many cases administrative data are more complementary to survey questions than likely to replace them. That raises the question of linkage: how can you link or connect the records for a particular individual to the survey responses for that individual? The records may have a slightly different name or in some other way not be a definitive match. So linkage, and also consent to link, are issues that come up with administrative data. And then finally we'll talk about social media and some related data sources that are found online and were never really intended for the kind of work people do with surveys, but that may nonetheless be able to, as I've been describing for all these data sources, supplement or in some cases even replace survey data.

So we'll turn first to mobile web. This becomes quite relevant now that smartphone ownership has become quite widespread. In some recent data from Pew, 72% of US adults own smartphones. Internationally, rates of smartphone ownership are generally correlated with GDP. In 2013 Pew reported that 63% of mobile phone owners in the US go online primarily via their phone. This is not the same for all members of the public: there are differences by age, education, and income. Younger respondents are more likely to go online primarily via their phone, respondents with lower levels of education are more likely to go online primarily via their phone, and respondents at intermediate levels of income are more likely to go online primarily via their phone. So because smartphone ownership is becoming quite widespread, the idea that online questionnaires can be administered to users of mobile devices becomes a question we can ask, and actually an important one. There are really two types of mobile web implementations, browser-based and app-based. The browser-based implementations are just what you would think.
A respondent accesses an online questionnaire using a browser that either comes with or has been installed on the smartphone. If the user agent string, a line of text that identifies the device and its operating system, is captured, then the questionnaire can be optimized for the mobile device. Without it, it really wouldn't be possible to know whether the respondent is on a computer or a mobile device, and if mobile, what type of mobile device. That wouldn't be knowable, and so there would have to be a single implementation of the interface for all users. But when optimization is possible, it does lead to a better user experience and better quality survey data. It also probably reduces the likelihood of break-offs, because respondents are generally more satisfied with the experience. In contrast to browser-based implementations, app-based implementations require respondents to download an app, usually one dedicated to the study, although there are some general-purpose survey apps they might be asked to download as well. Almost by definition an app is optimized for the mobile device, because it is a mobile app. The notion of optimization really assumes that a wide range of device types is involved in a study, that users of many different devices are responding. That's not always the case: some studies are targeted just at mobile users, and some at users of particular mobile devices, like iPhones. But optimization is an important option for designers when either a large range of devices is known to be involved, or the range of devices is unknown ahead of time.

So a number of features differ between mobile and traditional web surveys. Screen size, for example: mobile devices are almost always smaller than desktop and laptop computers, although the distinction is blurring. The way users register their responses, the user input, also differs.
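As an aside, the user-agent check described a moment ago can be sketched roughly as follows. This is a toy illustration: the substrings matched here are simplified assumptions, and production survey platforms typically rely on maintained device-detection libraries rather than hand-written patterns like these.

```python
def classify_user_agent(ua: str) -> str:
    """Roughly classify the responding device from its user-agent string.

    A minimal sketch of the detection step that lets a web survey
    serve a mobile-optimized layout; real systems consult maintained
    device databases rather than a handful of substrings.
    """
    ua = ua.lower()
    if "ipad" in ua or "tablet" in ua:
        return "tablet"
    if "iphone" in ua or ("android" in ua and "mobile" in ua):
        return "mobile"
    return "desktop"

# The survey server would branch on the result, e.g. serving larger
# touch targets and a one-question-per-page layout for "mobile".
```

The point is only that the user agent arrives with every page request, so the branch costs the researcher nothing extra to capture.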
With mobile devices, users generally touch the screen with a finger, whereas with desktop and laptop computers they use a mouse or keypad as a pointing device. And the context of use may well differ: with mobile web surveys, respondents are often, or at least can be, away from home. They can be completing the questionnaire while multitasking or distracted by the environment, and they may be completing it around other people, which is important for purposes of disclosure. In the case of more conventional web questionnaires, the respondent is more likely seated, and often at home or in the workplace.

Mobile web presents a number of opportunities for designers. It introduces more flexibility: the mode allows respondents to complete surveys in more situations than is possible with a larger, more stationary computer, using either wi-fi or a cellular network connection. Coverage is potentially improved, because mobile web makes it possible to recruit some respondents, for example mobile-only users, who might not otherwise be reachable. Sampling is potentially improved because one could use RDD-like procedures, that is, random digit dialing procedures where mobile numbers are generated randomly, to draw a probability sample of mobile numbers and then send SMS invitations to those numbers. The fact that there's a phone number associated with the mobile device opens up, at least in principle, a lot of possibilities that really don't exist for traditional web questionnaires. And mobile web allows collecting data other than questions and answers, other than self-reports: researchers can take advantage of features of the phone such as GPS, the camera, and other features that a traditional computer doesn't have. Mobile web presents opportunities, but it presents challenges as well.
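The RDD-like sampling idea mentioned above can be sketched in a few lines. Assume, hypothetically, that mobile numbers in the target country share a known set of prefixes; the frame is then every number under those prefixes, and a probability sample is a simple random draw from it. The prefixes and number format below are made up for illustration.

```python
import random

def draw_mobile_sample(prefixes, n, suffix_digits=7, seed=None):
    """Draw a simple random sample of hypothetical mobile numbers.

    Each number is a known mobile prefix plus randomly generated digits,
    mimicking RDD: every number in the frame has the same chance of
    selection, whether or not it appears in any directory or list.
    """
    rng = random.Random(seed)
    sample = set()                      # a set guards against duplicates
    while len(sample) < n:
        prefix = rng.choice(prefixes)
        suffix = rng.randrange(10 ** suffix_digits)
        sample.add(f"{prefix}{suffix:0{suffix_digits}d}")
    return sorted(sample)

# Draw 5 numbers under two invented prefixes; SMS invitations
# would then be sent to each sampled number.
numbers = draw_mobile_sample(["+1555", "+1556"], n=5, seed=42)
```

As with voice RDD, many generated numbers will not be in service, so in practice the drawn sample is screened or over-drawn to reach the target number of invitations.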
What's known so far, in the relatively small literature on mobile web, is that when people respond on smartphones as opposed to computers, they generally respond at lower rates, take longer to complete the surveys, and are more likely to break off. And they are more mobile. While they could use the mobile device much like a computer, that is, sitting at home, maybe even next to the computer, or sitting at the workplace next to the computer, they are in fact, at least according to the early evidence, more mobile. They are out of the house and out of the office more of the time, and they're even in motion, moving, when completing the questionnaire. So we'll spend a little time in this segment talking about issues of nonresponse, break-offs, and completion. Then in the next segment we'll talk more about measurement issues, including those that might be affected by mobility, like the fact that there are other people nearby, which can, as we were discussing, become important for disclosing sensitive information. The idea is that a mobile web respondent might be less likely to answer a sensitive question honestly or candidly if there are others within earshot.

This figure displays results from a number of studies, pulled together by Antoun, that compare response rates in mobile web to PC web, that is, a questionnaire completed on a personal computer. By the way, the literature is somewhat mixed about the term used for conventional or traditional web surveys. PC web, meaning personal computer, is one term; another is desktop web, meaning not that the respondent is necessarily using a desktop computer rather than a laptop, but that the operating system uses a desktop metaphor. You'll hear PC web, desktop web, and just conventional or traditional web survey, all meaning the same thing.
As you can see, the response rate is higher in most of these comparisons for PC web than for mobile web, and this is a relatively strong finding. For whatever reason, and we'll talk about some of those reasons as we go on, people seem to be less likely to follow up on an invitation to participate in mobile web than in PC or traditional web. One thing that can affect participation rates is the way mobile web respondents are contacted and invited to participate. The main distinction is between SMS invitations, which are really only feasible with mobile web, and email invitations, which are possible for both mobile and conventional web. The findings were quite similar in the few studies that have made this comparison. Mavletova and Couper in 2014 looked at the cumulative response rate over the number of hours after the invitation had been sent, and found that within one hour of sending an SMS invitation the participation rate was 21%, but within one hour of sending an email invitation it was 11%. And De Bruijne and Wijnant found that within one hour of sending an SMS invitation the response rate was 19%, versus only 9% by email. So SMS is much more immediate. If you think about it, the phone is a personal device, as opposed to a computer that one isn't carrying or wearing, and an SMS alerts users to the fact that they've been invited. If they want to participate at that moment, no matter where they are, it's quite possible to do that; less so if invited by email, because on a mobile device email generally requires opening the app and scrolling through the inbox. So SMS is particularly immediate, and this is replicated across these two studies. The issue of break-offs is also important to consider with mobile web, because break-offs seem to be higher than in PC web, and they seem to be affected by the design.
By design here, I'm really referring to the distinction between a paging and a scrolling design. A paging design presents usually one question per page, or at least no more questions than are visible on the page without scrolling. A scrolling design, which generally means vertical scrolling, makes it possible to put many questions, or even the entire questionnaire, on a single page and requires the respondents to scroll, usually vertically but sometimes horizontally. Mavletova and Couper found that a paging design leads to more break-offs than a scrolling design, presumably because the completion time was longer with paging than scrolling, both objectively measured with timestamps and subjectively through self-report. The longer completion time for the paging design is attributed to server transmission time, so it's possible that as transmission becomes faster universally, this distinction will be reduced. They also report more break-offs with SMS than email invitations. Now, just a minute ago we said that SMS led to faster participation and a higher early response rate. But what seems to be happening is that those who start the questionnaire via SMS are less likely to complete it than those who start via email invitations. So while more may start with SMS, fewer are seen to complete. To my knowledge, that finding has not been replicated, but it's certainly something to consider. So that provides a brief discussion of issues of nonresponse and break-offs in mobile web. In our next segment we will move to issues of measurement, which will include issues that have to do with the environment in which the respondent completes the questionnaire: whether there are others present, whether the respondent is mobile and/or multitasking, and other issues of design, including paging and scrolling.