Before COVID-19 hit, Experimaker was due to launch in December. Now, a large number of researchers are trying out online experiments for the first time.

So, we are launching Experimaker earlier than expected, as a free open beta. We want to make a positive contribution to the academic community in these challenging times.

Because we are launching early, many planned features will not be included in the initial release. We will be working around the clock to ship them as fast as possible.

We will be relying on feedback from the community to help us decide which features to prioritize. So, if a specific feature is stopping you from using Experimaker, please let us know, either by email or through the forums. Your feedback will directly shape Experimaker’s development.

Launch features

  • Editing interface overlaid on a real-time view of the experiment
    • All edits are immediately reflected in the view
    • All editing is performed through the GUI – no coding skills required
  • Navigate directly to any part of the experiment in one click
    • No need to run through the entire experiment to check trial #10!
    • Media control buttons for the next/previous trial or block
    • Media & slider controls for timepoints
    • Jump to a specific block or timepoint from the sidebar
  • Timing handled by WebAssembly
    • Allows precise, millisecond-accurate timing
    • Greater precision than is possible in an interpreted language such as JavaScript
  • By default, the reaction timer begins on trial start
    • Or choose to start the timer on presentation of a specific stimulus
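For the curious, the two timer modes could look roughly like this sketch. The class and method names are hypothetical, not Experimaker’s actual API; the clock is injected so the behaviour is easy to check:

```typescript
// Sketch of the two reaction-timer modes; all names are hypothetical,
// not Experimaker's actual API.

type TimerMode = "trialStart" | "onStimulus";

class ReactionTimer {
  private startedAt: number | null = null;

  constructor(
    private mode: TimerMode,
    private stimulusId: string | null,
    // The clock is injected so the sketch can be tested deterministically;
    // a browser build would pass () => performance.now().
    private now: () => number,
  ) {}

  onTrialStart(): void {
    if (this.mode === "trialStart") this.startedAt = this.now();
  }

  onStimulusShown(id: string): void {
    if (this.mode === "onStimulus" && id === this.stimulusId) {
      this.startedAt = this.now();
    }
  }

  // Reaction time in milliseconds, or null if the timer was never armed.
  onResponse(): number | null {
    return this.startedAt === null ? null : this.now() - this.startedAt;
  }
}
```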
  • Server interactions kept to a bare minimum – only 2 per experiment:
    • 1. At the start, to load all media & logic
    • 2. At the end, to send the results to the server
    • This minimizes lag and timing delays
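As a rough sketch of that two-interaction model – the endpoint URLs and payload shapes here are assumptions for illustration, not Experimaker’s actual API:

```typescript
// Sketch of the two-interaction model; endpoint URLs and payload
// shapes are assumptions for illustration, not Experimaker's actual API.

interface ExperimentBundle {
  logic: unknown;      // trial & block definitions
  mediaUrls: string[]; // media to preload before the experiment starts
}

// 1. At the start: one request fetches all media URLs and logic.
async function loadExperiment(id: string): Promise<ExperimentBundle> {
  const res = await fetch(`/api/experiments/${id}/bundle`);
  return res.json();
}

// 2. At the end: one request uploads all collected results.
async function submitResults(id: string, results: unknown[]): Promise<void> {
  await fetch(`/api/experiments/${id}/results`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(results),
  });
}

// Between these two calls the experiment runs entirely in the browser,
// so no network round-trips can delay stimulus timing.
```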
  • All media stored & presented using state-of-the-art compression & decompression techniques
    • Allows for more precisely timed media presentation
  • Trials generated automatically from stimulus sets
    • Or define trials manually if required
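One common way to generate trials automatically is to cross the stimulus sets, producing one trial per combination of items. A minimal sketch, with illustrative names rather than Experimaker’s actual API:

```typescript
// Sketch of automatic trial generation: one trial per combination
// (Cartesian product) of the stimulus sets. Names are illustrative,
// not Experimaker's actual API.

type Trial = Record<string, string>;

function generateTrials(sets: Record<string, string[]>): Trial[] {
  let trials: Trial[] = [{}];
  for (const factor of Object.keys(sets)) {
    const next: Trial[] = [];
    for (const trial of trials) {
      for (const level of sets[factor]) {
        next.push({ ...trial, [factor]: level });
      }
    }
    trials = next;
  }
  return trials;
}

// Two stimulus sets with 2 and 3 items yield 2 x 3 = 6 trials.
const trials = generateTrials({
  prime: ["dog", "cat"],
  target: ["bone", "fish", "yarn"],
});
```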
  • Divide the view into containers
    • Select between layout presets
    • Choose between 9 positions (anchor points) for media relative to its container
    • Choose between automatic scaling to the container or 1:1 actual-size rendering
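The nine anchor points are the combinations of left/center/right and top/center/bottom. A small sketch of how an anchor could map to a position inside a container (names are illustrative, not Experimaker’s actual API):

```typescript
// Sketch of the nine anchor points: each is a combination of
// left/center/right and top/center/bottom. Names are illustrative,
// not Experimaker's actual API.

type Anchor =
  | "top-left" | "top-center" | "top-right"
  | "center-left" | "center" | "center-right"
  | "bottom-left" | "bottom-center" | "bottom-right";

interface Rect { x: number; y: number; w: number; h: number; }

// Fraction of the free space placed before the media on each axis.
const frac: Record<string, number> = {
  left: 0, top: 0, center: 0.5, right: 1, bottom: 1,
};

function placeMedia(
  container: Rect,
  mediaW: number,
  mediaH: number,
  anchor: Anchor,
): { x: number; y: number } {
  const parts = anchor.split("-");
  const [vert, horiz] = parts.length === 2 ? parts : ["center", "center"];
  return {
    x: container.x + frac[horiz] * (container.w - mediaW),
    y: container.y + frac[vert] * (container.h - mediaH),
  };
}
```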
  • Stimulus types: images, text stimuli & paragraph blocks
    • Coming soon: sound, survey items & video
  • Trial ordering: fixed (manually specify the order) or random
    • Set a maximum number of repeats per item
    • Coming soon: minimum distance between repeated items
    • Coming soon: different ordering options for individual containers
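As a rough sketch of these ordering options, with a seeded RNG so the output is reproducible. Note that reading “max repeats per item” as the number of times each item appears is an assumption for illustration, and none of these names are Experimaker’s actual API:

```typescript
// Sketch of the ordering options: "fixed" keeps the given order,
// "random" shuffles. "Max repeats per item" is read here as the number
// of times each item appears in the sequence (an assumption for
// illustration); names are not Experimaker's actual API.

// Small deterministic RNG (mulberry32) so the sketch is reproducible.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

interface OrderOptions {
  mode: "fixed" | "random";
  maxRepeats?: number; // times each item appears (default 1)
  seed?: number;       // only used in random mode
}

function orderItems<T>(items: T[], opts: OrderOptions): T[] {
  const reps = opts.maxRepeats ?? 1;
  const seq: T[] = [];
  for (const item of items) {
    for (let i = 0; i < reps; i++) seq.push(item);
  }
  if (opts.mode === "fixed") return seq;
  // Fisher-Yates shuffle driven by the seeded RNG.
  const rand = mulberry32(opts.seed ?? 42);
  for (let i = seq.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [seq[i], seq[j]] = [seq[j], seq[i]];
  }
  return seq;
}
```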
  • Preview mode: remove the editing interface and run the experiment from start to finish, exactly as a participant would
    • Preview the data that will be saved
    • Recommended before going live
  • Response inputs: mouse, keyboard (choose valid keys) or on-screen button

Coming soon

These are just a few of the features we consider most urgent; they were originally planned for launch. If there’s any feature you’re waiting for – whether mentioned here or not – please let us know by email or through the forums.

  • Support for third-party participant-recruitment services via a redirect URL
  • Native app for participants
    • Guarantees a consistent hardware experience
    • Supports a single device or multiple devices
    • Provides an aspect-ratio preview
  • Video stimuli
  • Audio recording
  • Video recording
  • Experiment sharing
    • Share privately via email
    • Share to a public database
    • Choose what to share:
      • Stimuli
      • Experiment logic
      • Saved data
      • Audit log
      • Experiment stats
      • Or any combination of the above!