Now Live: Reflect - automated web app testing without writing any code

fitzn

Project URL: https://reflect.run

Hi everyone! I am excited to join the webwide community and share my first post. I began working on Reflect (https://reflect.run) with my co-founder back in 2018. We eventually left our jobs in September 2019 and have been full-time on it ever since.

Reflect is a tool that helps you test any website without writing any code. All you need to create a test is a URL. Our cloud-based browser lets you interact with your website just like a normal browser; behind the scenes, Reflect captures all of your actions and builds up a repeatable test script. When you're finished, you can run that test script whenever you want. So, if you can use your site, you can test your site.

[Screenshot: the Reflect dashboard showing a list of tests, organized by tags]

We've grown our customer base quite a bit in the last month, so we feel it's the right time to get Reflect into more people's hands. With that in mind, we have a Free Tier (no credit card required) that gives you 50 free test runs a month. Check us out at https://reflect.run and give it a spin. Please leave any feedback in the comments. We are in our early stages and we appreciate all thoughts and perspectives.

Thanks for reading!

- Fitz

 

kylejrp

Wow, this is incredible. I'll give it a go soon. Any support for measuring performance on a test (e.g. with Lighthouse)?

 

Adam

Awesome work! Pricing is steep for small businesses, but it's nice to see a generous free plan that should be good for many. I totally understand why the pricing is the way it is; I know this will save a lot of people a lot of time! Looking forward to giving it a go.

I found myself looking for something like this not long ago, for a reason totally unrelated to web dev. I wanted to get alerted as soon as a product on a store without any kind of API/feeds came back in stock. I ended up building a little PHP crawler on a cron job, but this would have been much easier!
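
For the curious, the crawler boiled down to a scheduled fetch-and-check. Here's a rough sketch of the idea in TypeScript (my real one was PHP, and the URL and marker text here are placeholders):

```ts
// Hypothetical restock checker, run on a cron: fetch the product page
// and alert when the "out of stock" marker disappears.
// Requires Node 18+ for the built-in fetch.
const PRODUCT_URL = "https://example-store.test/product/123"; // placeholder

async function checkStock(): Promise<void> {
  const res = await fetch(PRODUCT_URL);
  const html = await res.text();
  if (!html.includes("Out of stock")) {
    console.log("Back in stock - time to send the alert email!");
  }
}

checkStock().catch(console.error);
```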

 

fitzn

@kylejrp Thanks for taking a look. We'd love to hear your thoughts after you give it a spin. Great question on performance: there's no technical limitation to incorporating page-load-time measurements, best-practice checks, etc. We don't do anything like that today, since Reflect is purely focused on functionality and user behavior, but it's a natural evolution to get to the point where we deliver the kind of metrics Lighthouse gives you.
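
In the meantime, if you need those numbers today, you can collect them yourself alongside Reflect. Here's a minimal sketch using the lighthouse and chrome-launcher npm packages (this is not a Reflect feature, just an illustration):

```ts
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

// Run a Lighthouse performance audit against a URL and return the
// 0-1 performance score. The URL below is a placeholder.
async function performanceScore(url: string): Promise<number | undefined> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ["performance"],
    });
    return result?.lhr.categories.performance.score ?? undefined;
  } finally {
    await chrome.kill();
  }
}

performanceScore("https://example.test").then(console.log);
```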

@Adam Thank you for your thoughts, and thanks for webwide :) You're spot on with the pricing analysis. We actually just raised rates 2 weeks ago based on existing customer feedback and a few prospect meetings. Our tool can drastically reduce manual QA time, so we're mostly pricing based on that. As for setting up a test for an out-of-stock product coming back in stock: yup! You can absolutely create a Reflect test for that. Just use our "Observe Element" button to screenshot the "out of stock" text. When it changes, the test will fail and you'll get an email.

Thanks for taking a look at our product. We sincerely appreciate the feedback. Don't forget to set up a totally free account; we benefit from learning from your test executions alone, even if you never buy. Thanks!

 

Gummibeer

@fitzn Is it able to remember a cookie/session or share it between tests? Or does every test have to start with entering credentials? And do you encrypt the input values in any way?
My case would be a login-only app, so without an account you can do nothing.

And are test runs limited in any way, like runtime or steps, or can I test my whole app in a single run?

And how do you handle file uploads? I would have to test with multiple 100-150 MB images. Is there any limit?

And can you handle OAuth and popups, like Google login and the Dropbox file picker?

 

fitzn

@Gummibeer Great questions. Here are your answers:
- There is no technical limitation to sharing state between test runs, but right now we start with a fresh state on every run. We wrote the back-end execution code specifically with 'composition' in mind: you specify a list of tests to execute in a specific order, and we concatenate them and run them together. This is how you would avoid logging in as the first step of every test: create one "Login Test" and then compose it with another test afterward. We don't support full composition yet, but it's on the near-term horizon.

We encrypt input values at rest in the DB and in transit. We obviously cannot enter encrypted values into HTML forms, so those are plaintext. This is no different than if you hired a QA testing services firm to test your app: you might share credentials with them securely, but whenever they test your app, the tester has your input values in plaintext in their head.

Anytime you want to test your logged-in experience (indeed, this is our most common use case today), we recommend creating a least-privileged test user. This, again, is exactly what you'd do if you had a suite of manual QA tests to run after each deployment.

- Test recordings and test runs have an execution limit of 10 minutes. You can test as much as you want in 10 minutes, but it's much better to create small, modular tests that you can compose together so that you don't repeat yourself. Nothing stops you from testing your whole app in 10 minutes, but if step 783 of a 1,000-step test fails, do you really want to wait 7 minutes to verify you fixed the issue? We recommend much smaller tests that focus on individual components of your app. Then, use our scheduler to run all of your tests automatically and get notified of each individual failure.

- Since all actions occur within our cloud browser, we detect and intercept the file-upload browser call and display our own file-upload dialog. You select your file, we store it, and then we inject it into the page under test. This is something we have not seen from any other competitor or vendor in the web regression space; they all require you to manually upload your file into their system first. We do it on the fly, as if you were just uploading the file into the page itself. There is a 10 MB file-size limit right now. I know there are many use cases for larger files, but that's not our focus right now. (Of course, everything has a price 😉) One last related point: we automatically detect hovers in the same way as file uploads. Most other vendors require the user to type these in manually, including the selector; we do it all automatically.

- We'll capture pretty much any action, including OAuth, but if you're using 2FA we can't log you in, since we don't have your second-factor device. We detect alerts, prompts, and other browser dialog boxes. I haven't used the Dropbox file picker, but I'd be interested to try it.

---

Thank you for the thoughtful questions!! If I missed anything or anything is unclear, please let me know. Also, feel free to create an account and let me know what you think. It's free as in free. No credit card.

 

Gummibeer

@fitzn I will try it out. You answered everything pretty well. Right now I depend on a puppeteer/puppeteer test suite and am thinking about switching to microsoft/playwright.
Puppeteer is great, but Playwright has broader browser support, and I have some browser edge cases. The most disruptive one: in-app browsers like Facebook's, which are unable to perform authenticated file downloads because they open the link in the normal browser and don't pass along the current session.
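
For comparison, the session reuse and file uploads I asked about look roughly like this in Playwright (URLs, selectors, and paths are placeholders from my own setup, nothing to do with Reflect):

```ts
import { chromium } from "playwright";

(async () => {
  const browser = await chromium.launch();

  // Log in once and persist the session (cookies + storage) to disk.
  const loginContext = await browser.newContext();
  const loginPage = await loginContext.newPage();
  await loginPage.goto("https://staging.example.test/login"); // placeholder
  await loginPage.fill("#email", process.env.TEST_USER ?? "");
  await loginPage.fill("#password", process.env.TEST_PASSWORD ?? "");
  await loginPage.click("button[type=submit]");
  await loginContext.storageState({ path: "auth.json" });
  await loginContext.close();

  // Every later test reuses the saved state instead of logging in again.
  const context = await browser.newContext({ storageState: "auth.json" });
  const page = await context.newPage();
  await page.goto("https://staging.example.test/upload");
  // File uploads are set directly on the input, no dialog involved.
  await page.setInputFiles("input[type=file]", "./fixtures/big-image.jpg");

  await browser.close();
})();
```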

I will try to write/record a test for my dropbox-picker implementation and let you know how it worked.

For most of my projects this will be a great service! But for my primary project, the limitations (primarily file size), and also the pricing, will keep me away. Which doesn't mean you have to change anything! I have a very specific project and needed very specific custom services at nearly every level; I'm unable to use FaaS because of the image sizes, for example, and so on.
And the pricing issue is just down to the fact that I'm still in the building phase of it and don't have much income that I could happily reinvest.

And as a (backend) developer it's more natural for me to write tests in the same IDE, with my known and versioned selectors, instead of doing GUI-recording work. But for agencies and for projects with real QA departments, this kind of service is great, because the humans can focus on the important parts and the edgy edge cases instead of always checking whether the login and menu work, or simple things like that.

 

fitzn

@Gummibeer Thanks so much. Let me know if you hit any snags on the Dropbox case. I haven't used Playwright before, but I will take a look.

What you're saying on the pricing makes sense. Our hope is that 50 runs a month gets you Monday/Wednesday/Friday execution of 3 to 4 tests (roughly 13 scheduled days × 4 tests ≈ 50 runs). That's pretty solid coverage for a simple-to-medium-complexity app that changes weekly, for example. For testing needs above that, we believe it's worth the cost.

We know the feeling of wanting to write your own back-end scripts; we used to feel the same way. Both my co-founder and I are software engineers. We just found that we were repeatedly running through the same user flows after each deployment, and third-party testers were so expensive. Selenium was just more code, and we felt we'd rather be writing product code than test code. That said, certain interactions might be easier to test in code. But our feeling is that Reflect can hit the 90% use case really easily and quickly.

 

Gummibeer

@fitzn
first feedback:

  • I'm unable to revert a recorded step or delete it!?
  • I'm unable to paste something in an input field - both by CMD+V and right-click.
  • I'm unable to edit the input value in the right step-by-step sidebar.

All of this only applies to the initial recording; in the editing screen it works to delete a step, paste/edit the value in the sidebar, and so on.
But I miss a "save" button, i.e. saving without running. I wasted 2 runs just getting my login test done, because I had to get to know the interface. I wouldn't have needed either of them: nothing was wrong with my app, only with my recording skills.

I was able to destroy the interface.^^


I haven't done anything bad, only switched between my password manager and the browser window to type my password char by char. 🤔

After creating the login test I wanted to add a "create entity" one. I would love to be able to add a "depends on test" step. This would be super intuitive, and your app could handle the needed fan-in/out and check whether the test has already been executed in the session or not. Then I don't have to remember to run the tests in order, but only say "test entity" and your app handles the required dependencies. Being able to use the output of a test would be the cherry on top, something like GitHub Actions offers: my create-entity test could output the ID of the created entity, so my follow-up view/edit/delete tests could use this ID. Especially if I test deleting something, or run against a temporary app instance, I can't use a fixed ID.

Something else that would be great is support for ENV variables, so I can run my tests against my staging, production, or even dev environment and only have to adjust the base URL.

Regarding the view organization, I miss deep-level nesting. Let's say I have my project: I'll have one tag for it, the second tag will be the major area (auth, entityA, entityB, ...), and the third one could be more precise, like create, view, edit.
In mocha.js, for example, I can nest describe() blocks to achieve this. This approach is great for testing single areas, like all entityA or auth tests, or only the entityA/create tests.
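
A rough sketch of what I mean, combining the nesting, the ENV-based base URL, and the test-output idea (the helper functions are placeholders for my app's API, nothing real):

```ts
import { strict as assert } from "assert";

// Placeholder helpers standing in for real calls against my app.
async function createEntity(baseURL: string): Promise<string> {
  return "42";
}
async function viewEntity(baseURL: string, id: string): Promise<{ id: string }> {
  return { id };
}

// Base URL comes from the environment, so the same suite can run
// against dev, staging, or production.
const baseURL = process.env.BASE_URL ?? "https://staging.example.test";

// Run with mocha; describe/it are mocha globals.
describe("entityA", () => {
  describe("create", () => {
    let entityId: string; // the "output" of one test, reused by the next

    it("creates an entity", async () => {
      entityId = await createEntity(baseURL);
      assert.ok(entityId);
    });

    it("views the created entity", async () => {
      const entity = await viewEntity(baseURL, entityId);
      assert.equal(entity.id, entityId);
    });
  });
});
```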

----------

Now I will try to get the dropbox test working. :)

Okay, I'm unable to edit an existing test and have to re-record it as a whole!? :o Especially for my larger tests this would be a killer. Let's say I add an input field and want to add it to my tests: I don't want to re-record the whole test, only add a "focus & type text" step after one of the existing ones and an expectation at the end.

 

Gummibeer

The Dropbox picker doesn't work. It seems like your browser is detected as a bot, and even during recording the Dropbox popup wasn't able to communicate with my shootager.app page anymore.


@all: Don't bother trying to get in anywhere; all passwords have been changed, 2FA is in place, or the accounts are deleted. 😉

 

fitzn

@Gummibeer Thanks so much for the initial-run feedback. Here are responses to your questions; please let me know if I missed anything! In case you are interested, we have public documentation.

Copy/Paste is not yet supported. There's no technical limitation here, but we haven't implemented it yet.

All of this only applies to the initial recording; in the editing screen it works to delete a step, paste/edit the value in the sidebar, and so on.

Yes, this is by design. Our goal is to produce the highest-fidelity test recorder possible, so we capture everything we can. As you pointed out, once you save the test, you can open the test step detail and then edit the test in all of the ways you called out, including in-line text-input editing and deleting test steps.

I was able to destroy the interface

Sorry, can you clarify what you mean by this? You destroyed the Reflect interface, or your app's?

I would love to be able to add a "depends on test" step.

Yes, this is the "composition" feature that I mentioned in my previous post. We do not support it yet, but we've designed the back-end test execution to operate in exactly the manner you described: we handle the fan-out; you simply declare which tests are dependencies for your new recording. We'll spin up the browser, execute the dependency tests, and drop you into the Recording experience.

Environment variables, such as prod-to-staging URL swapping, are on the very near roadmap; this is something we are already building :) Cookies and session state are similar in that they are just browser configuration defined at the outset of the session. This also relates to chaining tests together and passing state from one to another: we have designed the concept of "Execution Units" wherein state carries over between tests.

Deep-level nesting is an interesting thought. We haven't heard this from any of our customers yet, but what you are asking for makes sense.

---
Thanks again! Keep it coming. You can also provide feedback directly to us at Reflect by emailing [email protected]. You'll get wider visibility there :)

 

fitzn

The Dropbox picker doesn't work. It seems like your browser is detected as a bot, and even during recording the Dropbox popup wasn't able to communicate with my shootager.app page anymore.

Dropbox likely displays a CAPTCHA because the browsing session has no existing cookies. Reflect has not solved the problem of automatically solving CAPTCHAs :) Jokes aside, this is a perfect use case for defining cookies or other session state at the outset of the test execution, so that the page relies on it and doesn't display a CAPTCHA or other dynamic behavior.
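
As a rough illustration of the idea in Puppeteer terms (since that's what you're using today; the cookie values and URL are placeholders, not a Reflect API):

```ts
import puppeteer from "puppeteer";

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Seed the session with an existing cookie before navigating, so the
  // third-party page sees an established session instead of a cold one.
  await page.setCookie({
    name: "session_id", // placeholder cookie name
    value: "<token>",   // placeholder value
    domain: ".dropbox.com",
  });

  await page.goto("https://example.test/picker"); // placeholder URL
  await browser.close();
})();
```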

I'm guessing the Dropbox window cannot communicate with the original app because we force all pages to open in a single tab to simplify switching between tabs (i.e., not showing browser chrome). This is a trade-off of our approach right now, but it gives us a good argument for allowing multiple tabs in a session.

 

fitzn

Okay, I'm unable to edit an existing test and have to re-record it as a whole!?

Not sure if you saw the "Re-Record" button on the test detail page, but it allows you to pick any test step and modify the test's definition from that point on. Clicking Re-Record causes Reflect to execute all of the test steps up to that point and then drop you into the regular Recording experience.

More information about Re-Record is available in the docs.

 

Gummibeer

Hey,

with the "destroyed interface" I mean your one. I've attached a screenshot. The step detail window was moved into the default sidebar and nothing was really clickable/visible/scrollable anymore there. Really no idea what happened and also not what exactly produced it. Just wanted to let you know, possibly you have any logs or something or can keep it in mind during future development/testing.

Yeah, Re-Record is okay, but I still have to redo everything after it. We have multi-step form wizards; if I add a new field in the first step I would have to re-record every following step. Or wait for composition and hope to be able to have a test for every step, which only works if the state isn't kept in a runtime JavaScript window variable but is somehow persisted in the session, DB, or cookies.

 

fitzn

@Gummibeer Ah, I see now. Yeah, that is weird. It looks like the test step detail section overwrote the test plan on the left-hand side. I'll look into this. Maybe it had something to do with the copy/paste? Never seen that before!

Yep, you're spot on. Re-Record still requires executing every subsequent step. The plan with composition is to allow you to slice and dice contiguous regions of your test definition into their own "test" units, then compose units however you want, including appending them in a single click. With this functionality, you could split a test of 100 steps at steps 40 and 70 into three units: A, B, and C. Then tell us to start a Recording session; we execute A, drop you into the live Recording experience, and when you're finished, append C. I think this would alleviate the pain you're referring to. There's nothing stopping us from doing this today; we just need to build it into the UI. Gotta find some time :)

Thanks, again!

 