A few months ago I was contemplating buying a subscription to a service when I was shown a screen that reminded me I could cancel at any time. I figured I could try it for a month and then cancel if it really wasn’t worth it.

I filled out the required fields and made the purchase. Only when I got to the post-payment page did I realize my card had been charged for the full year. Sure, I could cancel at any time, but canceling would only stop payment for the following year.

I was a little annoyed. I could have called someone and canceled right then, but I really wanted to try out the service, so I stayed a customer.

Months later, through some bit of luck, I ended up hanging out with a group of people and one of them worked at this company. I didn’t mention I was a paid subscriber but I did express interest in how they do things because I am always interested in how people do things.

“We A/B test everything!” they said excitedly. “We have a team dedicated to rolling these tests out and making decisions.”

At the time of the conversation I thought this was pretty cool. A/B testing is something I have never really done. The most we did was A/B test banner ads and some post-purchase upselling.

But this morning I was thinking about how I’d implement some A/B testing of my own when I realized: that’s what happened to me. The language on the page of the service I had signed up for was written (or honed) to maximize purchases. And I would imagine it does a very good job if it is still in use.

I can imagine this particular path converts a lot of visitors into paid subscriptions, but it probably hurts re-subscriptions a year out. Having now used the product for a few months, I know I won’t be renewing. Partly because the service didn’t do everything I wanted, but also because I felt tricked into paying that much for it.

I assume the tests don’t account for that.
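That blind spot is easy to demonstrate. If the test’s success metric is first-purchase conversion alone, a variant that converts better up front but renews worse still wins. A minimal sketch in Python, with entirely hypothetical numbers:

```python
# Hypothetical A/B results: variant B's copy converts better up front,
# but customers who feel tricked renew less a year later.
variants = {
    "A": {"visitors": 10_000, "purchases": 300, "renewals": 180},
    "B": {"visitors": 10_000, "purchases": 420, "renewals": 42},
}

for name, v in variants.items():
    conversion = v["purchases"] / v["visitors"]
    # Total paid subscription-years across the first two years.
    subscription_years = v["purchases"] + v["renewals"]
    print(f"{name}: conversion {conversion:.1%}, "
          f"subscription-years {subscription_years}")
```

Judged on conversion, B wins (4.2% vs 3.0%); judged on subscription-years over two years, A wins (480 vs 462). A test that only measures the first number will happily ship the copy that annoys its customers.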