
How to link your Qualtrics survey to Amazon Mechanical Turk

Brent H. Curdy, Duke University (Updated on 21 October, 2014)


If you are recruiting uncompensated volunteers to participate in your Qualtrics study, then all you need to do is provide them with the link to your study, found in the “Launch Survey” tab under “Edit Survey” in Qualtrics. However, if you are compensating participants with money, then you will need a way to keep track of participants, while keeping their data as anonymous as possible, so that you can send each participant their payment. Fortunately, both Amazon Mechanical Turk and Qualtrics are remarkably flexible and customizable and can easily be coupled with each other. The ability to compensate Qualtrics participants is the primary reason to “link” your survey and Amazon Mechanical Turk (hereafter: Mturk). By using the following instructions, you can easily identify participants and transfer funds.

Using two different websites to collect data exposes your data to a small risk of data collection error. Fortunately, both Qualtrics and Mturk contain built-in options and safeguards that, though not perfect, prevent respondents from “cheating” their way through the survey (Forced Validation and attention checks) or participating multiple times (Prevent Ballot Box Stuffing and Mturk’s limit of one attempt per HIT), and that guard against mistaken payments (payment amounts are preset in Mturk). Mturk’s user policy stipulates that participants’ “services [must] meet the Requester’s reasonable satisfaction” in order to receive payment. Ensure the quality of your data by including in your survey some “reasonable” attention checks that, if failed, will stop the survey (to avoid wasting the respondent’s time) and prevent payment (to avoid wasting the experimenter’s budget).

There are two parts to creating the “link” between the two sites: 1) amending your Qualtrics survey to receive and keep track of Mturk Workers and 2) creating an Mturk “task” to recruit and pay your Workers.

Note that you may want to include attention checks that, if failed, result in participants being unable to retrieve their validation code. This means that you will not have to compensate respondents who are not paying attention to the task. See Part 3 for these instructions.

Simple summary of how it works:

1) You will create a “task” on Mturk. Technically speaking, this task simply involves entering a code into Mturk. Participants are told that to receive a code, they must complete a survey and they are shown the link to the Qualtrics survey.

2) Incoming participants are assigned a random number (code) by Qualtrics. Workers who successfully complete the survey are shown their code on the last screen of the survey.

3) Workers enter this code into Mturk.

4) At the end of the day, or periodically, the researcher compares the Qualtrics-assigned codes (recorded in the data) to the list of Worker-entered codes on Mturk. For each verified matching code, the researcher approves payment to that Worker.
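The matching step in (4) can be sketched in a few lines of Python. The column names below (a “mTurkCode” column in the Qualtrics export, an “Answer.surveycode” column in the Mturk batch CSV) are illustrative assumptions; check your own downloads for the actual names:

```python
# Sketch of the code-matching step. Column names are assumptions -- adjust
# them to whatever your Qualtrics export and Mturk batch CSV actually use.

def codes_in_column(rows, column):
    """Collect the set of non-empty codes found in one column."""
    return {row[column].strip() for row in rows if row.get(column, "").strip()}

def split_submissions(qualtrics_rows, mturk_rows,
                      q_col="mTurkCode", m_col="Answer.surveycode"):
    """Return (codes to approve, codes to review manually)."""
    q = codes_in_column(qualtrics_rows, q_col)
    m = codes_in_column(mturk_rows, m_col)
    return m & q, m - q

# Tiny demo with made-up codes standing in for real CSV downloads.
qualtrics = [{"mTurkCode": "4821773"}, {"mTurkCode": "95"}]
mturk = [{"Answer.surveycode": "4821773"}, {"Answer.surveycode": "111"}]
approve, review = split_submissions(qualtrics, mturk)
print(sorted(approve))  # ['4821773'] -- verified match, pay this Worker
print(sorted(review))   # ['111'] -- no match, check by hand before rejecting
```

In practice you would load both row lists with `csv.DictReader` from the downloaded files; codes in the review set may be typos or incomplete attempts, so inspect them before rejecting payment.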

These instructions assume that you have a fully constructed Qualtrics survey and that the reader is familiar with the basic Qualtrics interface.

Part 1: Preparing your Qualtrics survey to receive Mturk Workers.

Step 1: Select and open the survey that you wish to link to Mturk. In the Edit Survey tab, click “Survey Flow” (See Figure 1). The Survey Flow window will pop up.

Figure 1: Qualtrics Survey Editor Bar with Survey Flow circled.

Step 2: At the bottom of the Survey Flow, click the green button, “Add a New Element Here” and select the “Web Service” button. Click on the “Move” button in the bottom right corner and drag this new Web Service element to the beginning of the survey flow (see Figure 2). Tip: This element can actually go anywhere in the survey, as long as it is before the End of Survey element. In fact, positioning this element immediately after your last attention check but before the End of Survey means that Qualtrics will not generate a code for participants who fail to complete your study. This will make it easier to sort codes when figuring out payment later.

Figure 2: Web Service Element at the very beginning of the survey flow.

Step 3: Enter the following URL into the URL box of the Web Service Element: _________________________________________________________ _________________________________________________________ Then click “Add a parameter to send to web service”. Two boxes will appear below the URL box with a plus (+) and minus (-) sign. Click the plus (+) sign to add a second set of boxes. These two boxes will determine the range for the random number/code assigned to each participant. A range from 1 to 9,999,999 is sufficient. In the first set of boxes, write “min” in the first and “1” in the second. In the boxes under these, write “max” in the first and “9999999” in the second (see Figure 3). Now, under Set Embedded Data, click “Add Embedded Data.” In the first box, enter “mTurkCode” and in the second, enter “random” (see Figure 3). Note: the words “min”, “max”, and “random” must be in all lowercase. The name of the embedded data (mTurkCode) can be in any case, but be consistent when referencing the term for Piped Text (as in Step 5). What this step does is add a variable called “mTurkCode”, with a random value between 1 and 9,999,999, to each participant’s data. It will appear as one of the first columns in the data set.
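Conceptually, the Web Service element just draws one random integer per respondent and stores it as embedded data. A minimal sketch of the equivalent logic (illustrative only; in the live survey the number comes from the web service configured above):

```python
import random

def assign_mturk_code(min_code=1, max_code=9_999_999):
    """Simulate the Web Service element: draw one random code per respondent.
    Qualtrics stores this value in the embedded data field "mTurkCode",
    which then appears as one of the first columns in the data set."""
    return random.randint(min_code, max_code)

code = assign_mturk_code()
print(f"Your validation code is: {code}")  # what ${e://Field/mTurkCode} displays
```

Note that with this range, two respondents receiving the same code is unlikely but not impossible; if you expect many thousands of respondents, a wider range further reduces the chance of a collision.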

Figure 3: Filled out Web Service Element.

Now that all participants are assigned a random number when they begin the survey, we must show them this number at the end of the survey so they can enter it into Mturk.

Step 4: At the top of the page, click on the “Library” tab, then click on “Message Library” (see Figure 4). On the far right of the screen, click the green button, “+ Create a New Message”. A pop-up window will appear. In the “Category” drop down menu, select “End of Survey Messages”. In the Description, enter the title you wish to use. In this example, the title I use is “EOS with validation message”.

Figure 4: Library Tab with Message Library circled.

Step 5: Write the text that you wish to use to present the user with the code and save it. You may want to include instructions (see Figure 5). For participants to see the code they have been assigned, you must enter the following text: _________________________________________________________ ${e://Field/mTurkCode} _________________________________________________________ The “mTurkCode” portion of this piped text refers to the name you gave the code variable in Step 3, above.

Figure 5: End of Survey message (font = 16pt, bold) _________________________________________________________ Thank you for participating. Your validation code is: ${e://Field/mTurkCode} To receive payment for participating, click “Accept HIT” in the Mechanical Turk window, enter this validation code, then click “Submit”. _________________________________________________________

Step 6: Now we must add this custom End of Survey Message that shows the validation code to the end of the survey. Return to the survey editor and open the Survey Flow again. At the bottom of the survey flow, click “Add a New Element Here” and select the red button, “End of Survey”. By default, it should appear as the last element in the survey flow. If it is not, click “Move” and move it there (see Figure 6).

Figure 6: End of Survey as the very last element in the survey flow.

Now click on “Customize”. A popup will appear. Select “End of survey message from the library…” In the drop down menu, select the message you created in Step 5 (see Figure 7). After you click “OK”, note that there is now a check mark next to “Customize” in the End of Survey element (see Figure 6).

Figure 7: Customize End of Survey popup with custom message from library selected.

Step 7: You have now added random number assignment to Qualtrics. Every person who begins this survey will be assigned a random number, and every person who successfully completes this survey will see their random number displayed on the final screen.

Part 2: Setting up Mturk.

This section assumes that you have an Amazon Mechanical Turk Requester account and that you have enough funds in the account to pay for your intended number of survey respondents. There are many online resources with instructions on how to do this. To practice or to test your survey, consider using the Requester sandbox site.

Step 1: Click the “Create” tab. Under “Start a New Project” on the left side, select “Survey Link” (see Figure 8). Then click the orange button, “Create Project”, in the bottom right corner.

Figure 8: Survey Link option

Step 2: You will now see the “Edit Project” window. Enter the appropriate information that pertains to your survey. For reference, the fields are listed below (field text is taken verbatim from Mturk; my notes follow each field):

Project Name (This name is not displayed to Workers)
This name is used for your reference on your Mturk Requester account. On your home page, you will see all of your projects, each listed according to this name.

Title (Describe the task to Workers. Be as specific as possible, e.g. “answer a survey about movies”, instead of “short survey”, so Workers know what to expect.)
This is the title that will appear among the hundreds of other tasks presented to Workers.

Description (Give more detail about this task. This gives Workers a bit more information before they decide to view your HIT.)

Keywords (Provide keywords that will help Workers search for your HITs.)

Reward per assignment
Tip: Consider how long it will take a Worker to complete each task. A 30 second task that pays $0.05 is a $6.00 hourly wage.

Number of assignments per HIT (How many unique Workers do you want to work on each HIT?)
A “HIT” is a task. This question asks how many respondents you want to get. For a “survey link” task, which is what we are creating, this is the number of people who can submit codes. Note that Mturk has no way to know whether the codes are correct, so it is theoretically possible for people to submit nonsense codes and take up all the HIT assignments. In practice, however, this rarely happens. Also, note that if a respondent does not “reasonably” fulfill the task or enter a correct code, you are not obligated to send payment, meaning that you can simply adjust the number of assignments and re-list the task.

Time allotted per assignment (Maximum time a Worker has to work on a single task. Be generous so that Workers are not rushed.)

HIT expires in (Maximum time your HIT will be available to Workers on Mechanical Turk.)
Remember that the higher your task pays, the faster it will be completed. As soon as the “number of assignments” is reached, the HIT will be taken down automatically.

Results are automatically approved in (After this time, all unreviewed work is approved and Workers are paid.)

You always have the option to “Accept” (pay) or “Reject” (deny paying) a Worker. However, if you take no action after this time, Mturk will pay the Worker from your funds by default. Workers can see this time limit, so if you set up a task with an automatic approval that is far in the future (the maximum is 30 days), Workers will be less likely to perform the task (i.e., less likely to take your survey). Note: Some participants are impatient and may email you demanding payment within mere hours (or sooner!).

On the bottom right corner, there is the “Advanced” option. This is where you set the qualifications for your Workers. By default, each task is set to allow only “Master Workers”. Master Workers command higher payments for performing tasks. Also, Mturk charges Requesters (you) a higher percentage for allowing you to use only their Master Workers. One can find many theories about which Worker qualifications are preferred for certain types of tasks, but the researcher should have her or his own theoretical justification for setting these limits. Generally, however, surveys do not require “Master” Workers. Click “Design Layout” to automatically save and go to the next section. If you do not choose to require Master Worker status for your participants, you will receive a warning popup that asks if you are sure (Remember, Mturk wants you to require Master Workers so they can charge you more money). Click “OK” to continue.

Step 3: Design Layout

This is how your task will appear if a Worker selects it. The Mturk template is easy to follow. Note that you can enter HTML view to embed your survey link if you want it to look more professional. The survey link is the same link that can be found in the Qualtrics survey editor under “Launch Survey”.

Step 4: Preview and Finish

All pertinent information related to your survey (payment, time, appearance to Workers) will appear on this page. Click “Finish” if everything is in order.

Step 5: Launching your survey on Mturk

The steps you have just taken have created a “Project”. Now you must “Start a New Batch” (i.e., a “batch” of responses to your project). It will appear under “New Batch with an Existing Project” under the “Create” tab. Click the orange button “New Batch”. You will see a preview of the task. Click “Next” if it is correct, or go back to the “New Batch with an Existing Project” page and click “Edit” to adjust it.

You will see a summary of your project, including the calculated financial requirements to publish your project. Click “Publish HITs” to publish… and that’s it!

Step 6: Paying participants

Go to the “Manage” tab and select the project for which you want to pay participants. Download your data from Qualtrics so you can compare the mTurkCode values with the validation codes entered by Mturk participants. For each code that matches, accept the work to pay the participant. (Note: I do not have any current studies to reference for this step, but this is the gist of it.)

Part 3: Attention Checks

If you are distributing your survey to an unknown population, or if it contains detailed instructions or repetitious segments, you may want to check whether your participants are paying attention. To do this, you might implement Attention Checks. In Qualtrics, these usually take the form of seemingly simple directions that contain a surprise or unexpected instruction, usually directing the participant to answer in an unintuitive way. Participants who do not read the instructions and answer in an anticipated or stereotypical fashion “fail” the check. In uncompensated surveys, failed attention check questions are usually merely noted and participants are allowed to complete the survey. However, in compensated surveys, where researchers cannot afford to pay participants for useless data, failing an attention check means getting ejected from the survey without payment. Many researchers interpret the failure to pass attention checks as a failure to perform a HIT to the “reasonable satisfaction” of a Requester. In this context, failing an attention check counts as a violation of the Worker’s agreement to complete the task in a reasonable way.

In recognition of the fact that Mturk Workers are real human beings often performing mundane or tedious tasks, it is best to include attention checks early in the survey and at reasonable intervals throughout, so that participants do not invest a significant amount of time in the survey only to be kicked out at the last moment. Upset participants can and do complain, both officially to Mturk and unofficially through forums like Turkopticon, where Workers advise each other about the quality of Mturk Requesters. Fortunately, in the author’s experience, Mturk Workers are usually very attentive and very few fail normal attention checks.
When a participant fails an attention check, they should be advised about what happened and tactfully reminded that failure of such a check is a violation of their agreement with Mturk and of the study’s consent form (which should mention this contingency). You might also display a copy of the consent form for the respondent’s reference. See Figure 9 for suggested text for such a message. Importantly, the text of such a message does not contain the validation code needed to submit to Mturk to receive payment. If “Prevent Ballot Box Stuffing” is selected in the Qualtrics Survey Options, or if the participant has “accepted” the task on Mturk, they will be unable to otherwise retrieve the validation code. These instructions do not include a detailed description of how to create an attention check question (which involves using Skip Logic); I will post those instructions shortly.

A note about the structure of a Qualtrics survey:

When thinking about a Qualtrics survey, it is important to think of each piece of the survey – the beginning, the questions, the answers, the end, everything – as an “element” that can be manipulated. Although the respondent to the survey often does not experience these elements individually, they exist as individual elements within the survey design. Functionally, there are two types of “End of survey” elements: 1) the “End of Survey(s)” that can be added in the Survey Flow and 2) what I here call the “Ultimate End of Survey” that is built in to all surveys by default. Although I make a distinction between them here – “End of Survey” and “Ultimate End of Survey” – the Qualtrics interface does not; they are both called simply “End of Survey.” Their functional difference, however, is key to understanding how to implement attention check questions.

Survey Flow “End of Survey” Elements

These are elements that are added to the survey flow. They are different from the Ultimate End of Survey in that they occur as a result of taking a particular path through the survey flow. Usually, placing one at the end of the normal survey flow is redundant. However, sometimes it is useful (as in the instructions above, “How to link Qualtrics and Mturk”), especially when there are two possible outcomes to taking the survey – i.e., “failing” an attention check or successfully completing the survey.

Ultimate End of Survey

After the respondent answers the last question, the survey must end with something; if you haven’t selected this ending (i.e., placed an “End of Survey” in your survey flow), then Qualtrics will use its default end of survey option: the Ultimate End of Survey. Fortunately, the Ultimate End of Survey is also customizable. To edit it, however, you do not go through the survey flow, but somewhere completely different (see Step 2).

Attention Check Questions

Attention Checks usually use multiple choice questions with added Skip Logic. If a question is answered incorrectly, then the Skip Logic is set to “Skip to End of Survey”. This is key: the “End of Survey” referred to by the Skip Logic is not the End of Survey in the Survey Flow, but is actually the Ultimate End of Survey. Therefore, the default ending is set to the “fail” message while the survey flow ending is set to the success/validation code message.

Simple summary of how it works: There are two “ends” to the survey. We edit the default end to say that the person has failed the attention check. Participants who successfully complete the survey avoid the default ending and instead encounter the “End of Survey” within the survey flow, which contains the validation code.

Step 1: Create the custom Ultimate End of Survey message. At the top of the page, click on the “Library” tab, then click on “Message Library”. On the far right of the screen, click the green button, “+ Create a New Message”. A pop-up window will appear. In the “Category” drop down menu, select “End of Survey Messages”. In the Description, enter the title you wish to use. In this example, the title I use is “Manipulation Fail”. An example of the text used in a recent survey can be seen in Figure 9.

Figure 9: Example Failed Attention Check Message _________________________________________________________ Thank you for taking our survey. As stated in the Consent Form, there are certain requirements that must be met in order to participate and receive compensation. You are seeing this message because you are not eligible to complete the study and receive compensation. This may be due to any of the following reasons:

  • You do not agree to participate.

  • You are under 18 years old.

  • English may not be your first language.

  • You failed to answer a question that checked to see if you read and understood the instructions.

This follows Amazon Mechanical Turk policy, which states that “a Requester may reject your work if the HIT was not completed correctly or the instructions were not followed.” You may close this window or use your browser’s address bar to navigate back to the Amazon Mechanical Turk site. The Consent Form from the beginning of the study is below if you would like to review it: [Consent Form follows…] _________________________________________________________

Step 2: Add the customized Ultimate End of Survey message to the Ultimate End of Survey. Go to “Survey Options” under the “Edit Survey” tab (see Figure 10). A popup will appear. In the “Survey Termination” section, click “Custom end of survey message…” (see Figure 11). In the drop down menu, select your library folder, then select the “Manipulation Fail” message. That’s it! The message you saved will now appear as the Ultimate End of Survey message.

Figure 10: Survey Options

Figure 11: Survey Options Popup

27 thoughts on “How to link your Qualtrics survey to Amazon Mechanical Turk”

  1. ken May 8, 2014 very good

  2. ken May 8, 2014 very good article


  4. Nick May 26, 2014 Very useful, although the end of the manipulation checks seems to be missing…

  • admin May 31, 2014 Thanks for pointing that out, Nick! I added the missing Figure 11 and smoothed the conclusion.

  1. ella July 10, 2014 I have followed these instructions but the MTurk code doesn’t appear on the end of survey message, I copied and pasted the code you used?

  • admin August 28, 2014 Make sure that you have an Embedded Data field and Webservice block in the beginning of your survey (in the Survey Flow view) in order to generate the Mturk code. Then, also make sure that the name you use in the pipetext box is the same as the name you gave the Mturk code in the Embedded Data field. There are a handful of parts that could be causing a problem; without more details, it’s hard to say what the trouble is.

  1. Michael Zoorob July 10, 2014 Thanks for your extremely helpful, thorough guide.

  2. Kun August 5, 2014 Does the webservice random generator work with survey preview? I don’t see anything.

  1. Kun August 5, 2014 Never mind, I enter random as Random. The capped letter was to blame

  • admin August 28, 2014 I’ve been traveling lately, so haven’t had much time to respond recently. Glad you figured it out, and thanks for the recommendation!

  1. Kun August 5, 2014 Answered a lot of my questions. I highly recommend this article.

  2. Mary August 27, 2014 This was incredibly helpful–much clearer than any help articles on mTurk or Qualtrics. Thank you for the time and effort!

  3. Laura October 30, 2014 Hey Brent, This is incredibly helpful, thank you! Should the display of the random code in the final message happen in the preview mode? The code is generating for each preview run-through, but the message at the end appears with no code displayed (the message says “Your validation code is:” and then there is nothing after it. Any advice very welcome!

  • Laura October 30, 2014 …..aaaand, never mind. An errant capital letter in my Web Service Element. All is now well.

  1. Gina November 14, 2014 Thanks for the very clear, step by step guide. It has been helpful to me, and I have shared the link with all my doctoral students and colleagues.

  2. Mehdi November 30, 2014 Thanks for the article. I have a question. I have two blocks of questions and using randomizer. In this case, how should I use “Web service” and “end of survey” blocks?

  • Brent April 21, 2015 I’m not sure that I understand your question. The randomizer will randomly show one block or the other. The “Web service” will add an embedded data field to your survey – in my Mturk/Qualtrics tutorial, I use the qualtrics random number generator to create and embed a random number in my survey, which I then show to respondents so I can figure out who finished the survey or not. The “End of Survey” blocks that appear in the Survey Flow view are placed in the survey flow where you want the survey to end early (i.e., instead of going to the “Ultimate End of Survey” which is found under the Survey Options tab in the Survey Editor view). If you don’t have anything special happening at the end of the survey, then adding an End of Survey Block in the Survey Flow is just redundant (and won’t have any effect noticeably different from using the “Ultimate End of Survey”).

  1. William December 4, 2014 My random code appears properly when I am previewing the survey, but once I activate it my End of Survey message reads “We thank you for your time spent taking this survey”. Any ideas? Thanks.

  • Brent April 21, 2015 It sounds like the issue is that you have the random code in the wrong End of Survey component. Notice in the tutorial that there is an “End of Survey” and what I call an “Ultimate End of Survey”. Also, make sure that you put the piped text for the embedded code in the appropriate End of Survey message so that it appears on the correct screen.

  1. Chi December 6, 2014 very helpful!!

  2. John Hyde January 6, 2015 Brent – I recently joined Qualtrics from Amazon and built a wizard that automates these steps for people. After going through the ‘survey builder’ they get a Qualtrics survey that’s already preset with attention checks, screen out messages, random MTurk codes, etc. Check it out, let me know what you think.

  1. Liza January 21, 2015 An extensive and useful article on this topic. I came to find out how the unique completion code was generated and whether it could be checked manually (and succeeded). Thank you!

  2. Ari February 16, 2015 I have a question. Does M Turk give us the Worker ID numbers of the respondents? If so, can I just put a place in the survey for the Worker ID number to be entered, and then only pay the ones who give completed surveys that do not fail the attention check? The reason I ask is that I have a survey on REDCap, and it is proving impossible for me to get REDCap to provide the respondent a code at the end of the survey. This might be a second best option, although it will take more time to review responses to be paid. What do you think?

  • Brent April 21, 2015 Yes, Mturk will give you the Worker ID numbers of the respondents. They will appear in the interface on the Mturk site, but you can also download the CSV containing your response sets from Mturk and the Worker ID numbers will appear there, too. Although your question appears straightforward, there are a couple of things to consider.

First, it’s important to remember that a Worker ID is also a person’s unique Amazon customer number that is linked to, and could potentially be used to track, a person’s wish lists, purchase history, and other Amazon activity. Therefore, the main reason I create and assign each person a random ID number is to provide an added layer of protection for respondents’ confidentiality. By assigning random numbers in Qualtrics (or REDCap, if you are able), a Worker’s ID number will never be in the same data set as their survey responses. Linking Mturk and your response set the way that you describe means that Worker IDs will appear next to their survey responses. This is something that should be disclosed to your IRB (though in my experience, different institutions’ IRBs have varying levels of familiarity with the Mturk data collection process).

Second, depending on the qualification settings you use for your HIT, you will get people who submit their Worker IDs without taking your survey. Obviously, you identify these people by failing to match your collected IDs with submitted IDs. A problem arises, though, if a person emails you saying that they tried taking your survey, but the power went out (or the survey crashed, etc.) and so they submitted their Worker ID but were unable to complete your survey. Before you field your survey, you should decide how you are going to handle these cases. One idea we tried was to add the stipulation in the HIT description that we would only pay people who complete the survey *even if the survey terminates for reasons outside of anyone’s control*.
If your survey is very long, however, respondents will get frustrated if they spend 30 minutes on your HIT, then don’t get paid when it wasn’t their fault. This is more of a caution than a suggestion, I guess – I usually just pay/accept the HIT of someone who emails me with a reasonable explanation. You must use Worker IDs to figure out who to pay. The confidentiality issue I’m referring to pertains to the way the data are recorded.

Here is the way that you propose your data will be collected/recorded:

Qualtrics/REDCap data set contains: Worker_ID, Demographics, Survey_Responses
Mturk data set contains: Worker_ID

Here is the way that I and my tutorial propose your data will be collected/recorded:

Qualtrics/REDCap data set contains: Random_assigned_ID, Demographics, Survey_Responses
Mturk data set contains: Worker_ID, Random_assigned_ID

Notice that in your method, the Worker_ID (a possibly identifying piece of information) is in the same data set as the Demographics and Survey_Responses. With my method, the Worker_ID is kept separate from the survey responses and the randomly assigned ID number is used to match the data to the respondent. Of course, it’s POSSIBLE to connect the data, but it adds a level of separation that helps maintain confidentiality.
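The two-data-set design Brent describes can be sketched as a join on the random ID. The record layouts and field names below are hypothetical; the point is that Worker_IDs never sit beside the survey responses until you deliberately link them:

```python
# Hypothetical records illustrating the two-data-set design described above.
# Survey data set: random ID + responses, no Worker_ID anywhere.
survey = [
    {"random_id": "4821773", "age": 29, "answer_q1": "agree"},
    {"random_id": "1100245", "age": 41, "answer_q1": "disagree"},
]
# Payment data set: Worker_ID + the code the Worker submitted, no responses.
payments = [
    {"worker_id": "A1XYZ", "random_id": "4821773"},
    {"worker_id": "A9ABC", "random_id": "1100245"},
]

def workers_to_pay(survey_rows, payment_rows):
    """Match submitted codes against completed surveys without ever copying
    Worker_IDs into the response data set."""
    completed = {row["random_id"] for row in survey_rows}
    return [p["worker_id"] for p in payment_rows if p["random_id"] in completed]

print(workers_to_pay(survey, payments))  # ['A1XYZ', 'A9ABC']
```

The join is only performed transiently, to decide whom to pay; the stored data sets remain separate, which is the confidentiality property the reply argues for.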
