
Google Ads Experiments

Published on: February 3 2023 by pipiads

Testing and Experimenting in Google Ads

hey there, I'm Benjamin from Loves Data. in this video, I'm going to show you three ways you can run a test inside Google Ads. experimentation and testing is an important part of optimizing your Google Ads campaigns. you can run tests to improve metrics like your click-through rate, your conversion rate and even engagement metrics like bounce rate.

here's what we'll cover. first, you'll learn how to run a test to determine which ad copy is the most effective. second, you'll learn how to test the success of different landing pages for your ads. and finally, you'll learn how to create and run an experiment inside Google Ads to test other changes to your campaigns.

okay, it's time to open up our Google Ads account and get started. running experiments in Google Ads lets you understand what works and what doesn't work in your account. this might sound strange, but I've found that when it comes to Google Ads, it's never a case of one-size-fits-all. things that work well in one account, like particular ads and configurations, don't automatically work well in another account. so experimentation lets you find the best options for your organization and your audience.

okay, let's start with ad testing. there are a couple of ways you can test your ads. the simplest option is to create at least two ads in each of your ad groups. let's open a campaign and then an ad group. here we can see we only have one ad variation, so we're not currently testing our ads. we can then create a new ad to see if we can improve our key metrics for the campaign. I recommend starting by writing an ad that is dramatically different to the existing ad. by testing major changes, you are more likely to see a measurable result. I recommend including different features, benefits or selling points in your new ad, and even trying a different call to action. once you're happy with your new ad, you'll need to save it and wait until enough data has been collected before making any decisions.
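Once both ads have accumulated data, you can sanity-check whether the click-through-rate gap between them is real or just noise. Here's a minimal sketch of a standard two-proportion z-test in Python- this is ordinary statistics, not part of Google Ads, and the click and impression counts below are hypothetical:

```python
from math import erf, sqrt

def ctr_z_test(clicks_a, impr_a, clicks_b, impr_b):
    """Two-proportion z-test for the CTRs of two ads.

    Returns (z, p): the z statistic and the two-sided p-value.
    A p-value below 0.05 is the usual bar for declaring a winner.
    """
    p_a = clicks_a / impr_a
    p_b = clicks_b / impr_b
    # Pooled CTR under the null hypothesis that both ads perform the same.
    pooled = (clicks_a + clicks_b) / (impr_a + impr_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impr_a + 1 / impr_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: ad A got 120 clicks from 4,000 impressions,
# ad B got 90 clicks from 4,100 impressions.
z, p = ctr_z_test(120, 4000, 90, 4100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the p-value comes in under 0.05, so the CTR difference would be unlikely to be chance; with smaller counts the same gap can easily be noise, which is why the video stresses waiting for enough data.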
typically, you'll want to wait at least two weeks while the experiment runs before applying changes to your campaigns and account. this technique can also be used to compare the performance of different ad formats, for example, if you wanted to compare a standard text ad to a responsive search ad.

apart from creating separate ads inside each of your ad groups, you can also use the ad variations feature to test changes across multiple ads at the same time. for example, you could test a different headline across all of the ads in your account to understand how the change impacts performance. if you already have a campaign or ad group selected, then check that you're viewing ads and extensions and then click the plus sign to create a new ad. you can then select ad variation and, just to point out, you can also find this feature by selecting all campaigns, then drafts and experiments, and choosing ad variations.

okay, you just need to decide if you want to test the change across your entire account or in particular campaigns. for this example, I'm going to leave all campaigns selected. then you need to choose the types of ads you want to test. currently, we can choose from standard text ads or responsive search ads. I'm going to leave text ads selected. now we have the option to add a filter so that only certain ads will be changed. for example, we might want to only change ads that use a particular headline or send people to a particular page on our website. let's click continue.

now we can define what we're going to change in our ads. we can use find and replace, which lets us find a particular word or phrase and change it to something else. we can use update text to change multiple elements in our ads at the same time, and we can select swap headline 1 and headline 2 to change the order our headlines are displayed. for this example, I'm going to select find and replace.
I can then enter the word or phrase I want to find and choose where it is located- for example, selecting headlines will look for this phrase in all the available headlines, while headlines and descriptions will look for this phrase anywhere in my ads- and then I can enter what I want to replace the phrase with. I can then click continue. I can name my new ad variation, choose an end date and choose the experiment split. I recommend leaving this as 50% so that you see measurable results quickly inside your account. then we just need to click create. that's it. now we need to wait until enough data is collected. you'll then be able to come back and see the impact of your changes.

we can also use this technique to test different landing pages for our ads. let's say we have two potential landing pages and we want to know which landing page we should use in our ads. we can use ad variations to understand how the different pages perform. to do this, let's create another ad variation. again, I'm going to leave all campaigns and the other defaults selected. let's click continue. we keep find and replace selected. we enter the current URL, then select final URL and enter the new URL we want to test. then click continue. I'm going to name the ad variation, leave the other defaults and click create. that's it. we'll now be testing the two different landing pages in our Google Ads account.

I also want to mention that there are other techniques you can use to test your landing pages. Google Optimize is a great way to run experiments on your website after people have clicked through from your ads. maybe that's a topic for a future video.

now I want to show you how you can run an experiment in Google Ads to test other changes to your campaigns, for example, if you wanted to test changes to bids or keywords. to run an experiment, we need to start by creating a draft of the changes we want to test.
you can find this by selecting all campaigns, then drafts and experiments and then campaign drafts. let's click the plus sign to create a new draft. we then need to select the campaign for our experiment, name the draft and click save. we can see the draft for our campaign on the left. now we can make the changes we want to test to our campaign. we might change keywords or the different targeting options for the campaign. for this example, I'm going to keep things simple: I'm going to test the bid used for the campaign, so I'm going to select settings and then I'm going to adjust the bid.

once you're happy with the changes you want to test, it's time to head back to drafts and experiments. for our draft, we need to name the experiment. we can set a start and end date and an experiment split. I'm going to leave the defaults for my example and I'm going to click save. our experiment will now be created. once our experiment has been created, we'll see it displayed with a small beaker icon on the left. we can select this to see the data that has been collected and our results and, as I previously mentioned, I recommend waiting at least two weeks, preferably three, before making a final decision on your experiment.

so there you have three techniques you can use to run tests inside your Google Ads account. it's best to run a single test at a time, and I recommend starting by testing one variation against your original. this allows you to see results quicker as you get started, compared to testing lots of different variations at once. what are you going to test in Google Ads? I'd love to know. let me know in the comments below and if you found this video helpful, then please subscribe, share it with your friends and hit the like button so I know to make more videos like this. see you next time.
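The "at least two weeks, preferably three" advice can be checked against your own traffic with a rough back-of-the-envelope estimate of how fast conversions accumulate per experiment arm. A minimal sketch, with entirely hypothetical numbers:

```python
from math import ceil

def days_to_target(daily_clicks, conv_rate, split=0.5, target_conv=100):
    """Estimate how many days an experiment arm needs to collect
    `target_conv` conversions, given the campaign's total daily clicks,
    an expected conversion rate, and the arm's share of traffic."""
    conv_per_day = daily_clicks * split * conv_rate
    return ceil(target_conv / conv_per_day)

# Hypothetical campaign: 300 clicks/day, 4% conversion rate, 50/50 split,
# and we want 100 conversions in the arm before judging results.
print(days_to_target(300, 0.04))  # → 17 days per arm
```

If the estimate comes out far beyond two or three weeks, that's a sign the traffic is too thin for the test you're planning, and a bigger change or a higher-volume metric may be a better candidate.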

Tutorial: Grow Your Google Ad ROI With Experiments

hey, what's up? this is daniel barrett from adwordsnerds.com, and in this video i'm going to give you a quick tutorial on my favorite addition to the google ads interface: the experiments tab. let's jump straight to the video. [Music]

hey, just wanted to make this quick video to walk you through the experiments interface, which is pretty new. it actually used to be far more complicated; they made it a lot more streamlined, and if this is not something you're using, you should use it, because it is awesome. you can use it to test pretty much any campaign setting, any ad variation, different landing pages, all sorts of different stuff, and you have a lot of control over how big a test you do. it's just really amazing. so let me show you.

this is just my test account, so it's not showing any data from clients. if you're in the campaigns section, you've got to be looking at all your campaigns to do this. so, in the campaign section here, you go down on the left and you've got this thing that says experiments, and you go to all experiments. now, if you don't have an experiment, totally fine. so what you're going to do is click plus, and then- now, you could do optimize text ads or whatever, but let's do custom experiments. and you're going to choose search, because we don't typically do a whole lot of display advertising here. so we're gonna go ahead and click continue.

now you can name the test something, say "test performance max". and then, down here, you're gonna have base campaign. you're going to select this by clicking it and choosing whatever campaign you want to test against. so, whatever your winning campaign is, whatever you're considering switching over to Performance Max, that's what you would choose. so you click that there.
so that's the base campaign, and then you're going to have the trial campaign. you can give this a suffix so you can see, when you look at the name of the campaign, which one it is. i'm going to go ahead and click save and continue. now what that's going to do is open up your trial campaign, and for this campaign you can change anything you want. i could pause an ad group, or i could come into the settings and change the bid strategy that i'm using. so instead of cost per click, i could change my bid strategy to maximize conversions or whatever it is. just go ahead and save those changes.

now, once you've made all the changes you want to make to this trial campaign- whatever you want to experiment with; typically one thing at a time is best- you're going to come up here to where it says schedule, and the first thing we're going to do is set goals. so what i want to measure is, let's say, my cost per conversion, and i want that to go down, so cost per conversion decrease, and i want my conversions to go up. you can change it to match whatever you are trying to measure: how many clicks do i get, how many impressions do i get, whatever.

you can also set the percentage of the budget that you want to split between the two versions. so, for example, you could say i want the trial campaign to get most of my budget, or i only want the trial campaign to get 10%. that would be really conservative. typically you're going to want to put this at 50%, because most advertisers don't have a big budget, and it's going to take forever if you don't give the test enough juice. so just set 50 percent.

you can also set a start and end date. i suggest setting no end date and just manually checking, but if you don't think you're going to remember to manually check on your experiment, you could set, for example, a duration- say it's going to run for 100 days- or you could set a specific date you want it to end on.
i'm going to choose none, and then, yes, this way, if you make a change to your base campaign, it'll automatically sync over to your trial campaign. generally, that's what you want to do. and then you click create experiment. that's not going to work for me here, but once you've done that whole process- let me go back and show you what this will look like if you come down to experiments- you've got these two experiments running, for example this broad match one. and if i click on it, it's going to show me my two experimental campaigns running side by side over whatever time period i'm looking at. it'll show you the difference. it'll say, oh, this campaign here got two conversions and this campaign here got 10, or whatever, showing how much of a difference there is. it'll even show you the percentage of difference and whether or not it's statistically significant.

then, if you like the results, you click apply experiment, and it basically makes your trial campaign the default campaign. similarly, if you click end experiment, it just stops putting money into the trial campaign. you can see these campaigns if you go to enabled- see how this one has a little beaker here. that means it's an experimental campaign.

actually, just to show you what this will look like, because this was just an example, i'm going to come to experiments, open this broad match test, and end the experiment prematurely. boom, there you go. now that experiment is over, it says it's inconclusive, and now 100% of my budget is back where it's supposed to be. hope that makes sense. let me know if you have any questions. if you're not using this experiments feature, you should, because it's really amazing- it's a safe way of testing big changes without losing what you already have. let me know if you have any questions. [Music]
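The percentage-of-difference figure in that side-by-side view is simple arithmetic: the trial campaign's metric expressed as a change against the base campaign's. A quick sketch of that calculation, using hypothetical conversion counts:

```python
def percent_difference(base_value, trial_value):
    """The 'difference' figure in an experiment report: the trial
    campaign's metric as a percentage change against the base campaign."""
    return (trial_value - base_value) / base_value * 100

# Hypothetical scorecard: base campaign got 10 conversions, trial got 2.
print(percent_difference(10, 2))  # → -80.0 (the trial did 80% worse)
```

Note that the percentage alone says nothing about significance- a -80% swing on ten conversions can still be noise, which is why the interface pairs the number with a statistical-significance indicator.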


Small experiment ideas in Google Ads

yeah, so let's close here by talking about testing smaller items- smaller ideas- in google ads with experiments. let me go through each one, and you can give me a response: do you like setting up an experiment for it, why is this a valid thing to test, have you tested this in the past, or would you like using an experiment to test it?

so the first one that came to mind is the calls bid adjustment. we had a debate about what it should be called- i forget what it's called right now- but there's an area in your sidebar, i'll pull it up here, chris, for accuracy, called advanced bid adjustments, and one of the interaction types there- the only one, chris- is called calls, and you can basically increase your bids when google is willing to show your call extension, to try to get the call extension to show more and get more calls. chris, i like to be aggressive with this sometimes and increase it by 50% or 100%. do you think that needs an experiment, or what do you think about that?

okay, i'm gonna call this the lightning round, and i'm gonna give it a thumbs down. no, i wouldn't do an experiment on that. you would just turn it on and see if you got more call extension calls- see if your conversion rate went up, or you got more conversions. and most of the time, if you look, the amount of times that this influences the bid is actually minimal- well, not always minimal, but it's definitely not always the case. so i would say no, because it's easily reversible and it's only one thing that you're changing. bids are not one thing, you know: changing your bidding strategy is not one thing, changing to broad match keywords is not one thing, changing your landing page is not one thing. this is one thing. i don't think it's worth the experiment.
the only time i would think it could be called for with an experiment is if you're doing an extreme adjustment, like plus 300 percent. whoa, slow down. yeah, something really heavy. sorry, i've never done 300 in my life.

okay, let's say you've got search partners off right now and you want to run your campaign with search partners on, and you don't know how much traffic is going to go to that. it could be the normal- we've seen over the years three to ten percent of your traffic going to search partners- or it could be something crazy, like we've seen recently sometimes, where 50 to 90% of your traffic goes to search partners, especially if you're not bidding enough to show up often on google search. what do you think about running an experiment where all you do is turn search partners back on?

i wouldn't run an experiment. you would just take the risk of a lot of your traffic going to search partners- more than you're comfortable with- and just segment out the data day by day and make sure you're on top of it. i'd make a note, and then i'd segment it after a certain amount of time to see how it did. but we should talk about search partners in an upcoming episode. my tune has changed on search partners- some things i've seen have kind of changed my mind. that's a different topic, though, and i won't say whether the change is positive or negative. but that's not good podcasting! i'm gonna leave the tease with the audience here. well, i guess we'll talk about it. you want to talk about it next week? maybe. oh, chris, stop right now- we are fully teased.

so what about turning the display network on in the settings?
and this is when it's a regular search campaign, but when a website or a user that google thinks is similar to your search audience comes along, they can show your display ad. what do you think about that? again, it's such a binary, on-off kind of thing. i'd just go and turn it on. no experiment.

what about new languages- going from english to all languages? this is something we can't get data on, at least that we know about in 2022. there are no segments: we can't see how much came in from spanish, from russian, whatever. there's no segmentation. and so i like an experiment on this, chris, because it controls the date range for you, it controls how much you allocate percentage-wise, and at least it separates things, so you don't have to remember, like, on this date i added the languages, and then compare the date range now versus the previous date range. it just gives you that data.

and the other thing i like is it kind of focuses your mind. you go: you know what, for two weeks or a month or whatever, we're not making any changes. we just want to isolate this and see how the campaign runs when all languages are on, to see if there was any difference and if we can tease it out. and i feel like taking the formal steps to run an experiment can help give you the discipline to not make other changes on the experiment campaign. i like that idea just because we can't really do it any other way. i'm absolutely with you on that one: new languages, experiment for sure, because it's not reportable. okay.

and another one here on the settings page: what about the most targeted, most isolated advanced location option- people in or regularly in your location?
say you have that set up, and then you change it to people in, or searching about, or interested in your location, and open it up. do you think that's valid to test in an experiment, to see the data? wow, absolutely- again, because there's no reporting. skip back a couple of years to whenever google actually showed us people in versus other- they actually showed that data. they don't want to show that data anymore. so, yeah, absolutely.

so i guess what it comes down to- what you've isolated here- is: if it's a single binary change and we have reporting on it, or segmenting of some kind- if we can get data specifically on that one change- then it may not be worth an experiment. but if it is something that cannot be isolated and reported and segmented, then an experiment seems like a great solution. very well noted there- you were doing a very subtle thing. i like that. good question. yeah, i was a little worried, because i was like, maybe none of these little small ideas are worth doing an experiment on. but then, no, you liked that one. wow, good point. i never thought about it that way, but that's a good point.

and one final thing: i just went into a campaign today- we did a bunch of new campaigns, and i had set sitelinks up on only one campaign. i thought they were at the account level; turns out they were only on one campaign. so i added them at the account level, so they hit all campaigns. but it had me thinking: we always wonder, how much do these extensions matter? or sometimes people think: no, i want someone to read my ad copy, i want them to go to that specific page. i know when someone clicks through to my home page- we invested in it, we've got tracking, we've got all that stuff- i want them to go there.
their conversion rates are high. my service pages suck- they're set up like blog pages, still, and they don't have lead forms. i know they're not going to convert, so i don't want to run those sitelinks. i hear that from clients sometimes when they're in that situation. but then i'm like, well, it's going to hurt us on quality score, and we're probably not going to be able to show up as high or as often. and if you ever want to test how much ad extensions matter, or how much they change things, isolating it with an experiment would kind of be an ideal way to do it.


Google's New Experiments Feature is Awesome!

google rolled out something pretty cool. i'm excited. it came out today- i'm recording this on january 19th, and bam, there's january 19th, just so you all know how proactive i am. "test with even more ease and confidence with the new experiments page." so, in order to make testing easier, they're rolling out the experiments page. great. you can do all the things that we've been able to do with experiments before, so it's not earth-shattering, but it is easier. the process for experimentation is simpler- they've streamlined it a little bit, and you get to do it all from within a single dashboard. i'll show you how it works in just a second. so that's pretty cool.

the thing that i love, though, is this sync feature: "optimize your experiments with sync to make it easier to run valid experiments. we're rolling out a new way to sync your experiments with their corresponding campaigns." what happens is, if you allow google to sync, then google will automatically update your experiment with any changes you make to the original campaign. that is, pardon my language, badass. now you can actually run an experiment in a live ecosystem. normally, if you're optimizing anything in an a/b test and you make a change, you kill the a/b test. well, now you're able to make a change but continue the a/b test and, in theory, see the disparity in the impact of that change across two separate campaigns. super freaking cool.

there are notes about custom experiments that i'll include in the description, just so you know how to do it. one quick thing that i thought was really important is- where's monitor... monitor... there we go- you have to understand the scorecard of the experiments and how to interpret your scorecard.
if you don't, or if you're looking at the wrong thing, or you build it incorrectly- and i'll show you how to build it in just a second- then it's not going to yield much in the way of anything tangible. the other thing that i'd recommend- and i'm only saying this because i made these mistakes- is that split testing gets very dangerous. we don't do it a lot for client campaigns, to be honest with you, for this reason: a, you need significant spend just to make sure you're not looking at some level of machine-learning-based confirmation bias. and b, you want to look at things like cyclical market changes and the competitive landscape. there's more to split testing than the elements that you can control.

all of that said, let me show you how this works. so you're going to go into all campaigns, you're going to go down to experiments and then click on all experiments, and now we've got our little blue bubble here. there are three types of experiments, but two of them are kind of myopic in view: optimize text ads, video experiments, and custom experiments- that's the one that we want. do we want to run an experiment via display or search? i'll say search, "testing for youtube", just in case somebody in my org freaks out.

so now we have to choose a campaign, and we can't choose a campaign with a shared budget, which makes sense, because you're about to split the budget between the primary campaign and the experiment- it's going to be hard to do that in a campaign with a shared budget. so i'll just choose the first campaign that shows up, and now we're going to play the nomenclature game, which is fine. save and continue. now i have my campaign. so now i'm in the campaign build dashboard, which you'll recognize, with the exception of this little bar: 1, 2, 3. so we're setting it up, and you'll notice that it's telling me no changes were made.
so if i wanted to make a change to this particular campaign- let's say i want to play with my bid strategy or whatever- we can go bam, and then i'm going to change the bid strategy to maximize conversions. cool. so now i've made that change, and i'm done.

now, if i want to schedule this experiment, i can tell lord google: hey, this is really important. by the way, this is the goal that i'm aiming towards- am i increasing conversions? am i decreasing cost? let's say so. i'd like to see these two things- and i'm not telling you that that's the prerequisite; you decide what goals you're pointing at and the direction you want them to head. and then decide your budget split.

i'm trying to decide whether or not i'm going to say what i'm about to say. i have a hard time thinking that an experiment that's not 50/50 is going to be valid. keep in mind, your experiment split and impression share may not always match: for example, your experiment could have a higher impression share than your original campaign, despite having a lower experiment split. if your experiment split isn't 50/50, it stands to reason that you could say, okay, if i do 10/90, then i can extrapolate from the data- whatever conversions i'm getting, i can just multiply by the inverse of the distributed budget. but no- because google has critical-mass thresholds, and if you don't reach those, performance suffers. and i actually maybe want to be policed on this a little bit: if somebody has a reason why a non-50/50 budget split would be viable from a data perspective, i'd love to hear it. but as a data nerd, i'm going to tell you that i don't buy it yet. i'd love to be schooled up.

experiment split: you've seen this before.
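The 10/90 extrapolation being pushed back on here is just division, which is exactly what makes it tempting. A sketch of the naive math with hypothetical numbers; the catch, as argued above, is that it silently assumes performance scales linearly with budget, which learning-phase thresholds can break:

```python
def naive_scaled_conversions(trial_conversions, trial_split):
    """Naively extrapolate a trial arm's conversions to a full-budget
    figure by dividing by its budget share. Only valid if performance
    were linear in spend -- which bidding thresholds generally break."""
    return trial_conversions / trial_split

# A 10% trial arm that collected 12 conversions "extrapolates" to 120.
print(naive_scaled_conversions(12, 0.10))  # → 120.0
```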
if you've run experiments, google asks if you want it search-based or cookie-based, which basically means either every single time someone searches, we're splitting the traffic, or, if we've seen this person before, we give them the same arm they got last time. i think this really depends on what it is that you're experimenting with, obviously. but for me, i'm going to do cookie-based, because i have higher-value lead prospects, a longer sales cycle, those types of things, and i want to keep people in the buckets they land in.

start and end dates for experiments: i've been saying something for a really long time, by the way- i've been saying that if you add end dates to things in google, i think they prioritize your ads. i don't believe that to be true anymore. i think it used to be true, but i believe they pulled back on that, and they've probably pulled back since i've been saying it. so, for all of you that i've lied to, i'm so sorry. you can decide when your end date is and what the duration is- say you want to run this for 30 days or whatever.

and then- this is the cool part- sync. changes made to your base campaign will automatically sync to the trial campaign. that is so cool. i love, love, love that. and they've got a whole article on sync- you can find that there too. then you'd run your experiment. i don't actually want to run this experiment, so i'm not going to do it, but i just wanted to show you what the workflow looks like. i'm really excited about it. not for everybody, but i do think that maybe what we would do internally is use it to test new features that google rolls out- that would be a really good opportunity. i like the ease of use, the improvements they've made to the whole workflow, and also this new sync feature is super cool.
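To make the cookie-based versus search-based distinction concrete: cookie-based means a visitor is assigned to an arm once and stays there, instead of being re-randomized on every search. Here's a toy illustration of that kind of sticky bucketing via hashing- this is just the general idea, not Google's actual implementation:

```python
import hashlib

def assign_arm(user_id, trial_split=0.5):
    """Cookie-style bucketing: hash the user ID to a number in [0, 1)
    so the same user always lands in the same arm."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000
    return "trial" if bucket < trial_split else "base"

# The same visitor gets the same arm on every search:
print(assign_arm("visitor-123"), assign_arm("visitor-123"))
```

Sticky assignment matters for long sales cycles- the use case described above- because it stops one prospect from seeing both versions across repeated searches and muddying the comparison.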
so, whatever that's worth. appreciate y'all watching. i'll see you tomorrow. thank you so much for watching. if you enjoyed the video, please give us a thumbs up- that lets youtube know we actually know what we're doing. we shoot a video every single day, so if you want to be notified, hit that subscribe button, and if you have any input, don't hesitate to hit us up in the comments. we'd love to hear from you- we get very little human interaction. thanks for supporting us.

Google Ads Experiments

hi there, it's Carl, and I work at 5x gross, and today I wanted to share with you a very powerful feature in Google Ads that basically no one uses: Google Ads experiments. so, as you can see on your screen, we have the experiment summary. but before we dive deep into the real example that I'm gonna show you- one we did for our client- I want to talk a little bit about Google experiments.

to get better results from Google Ads, you need to make changes to your campaigns- a very obvious statement- and every tweak you make to your campaigns is literally a mini test. so Google Ads has a special feature called Google Ads experiments, as you can see, which allows you to properly A/B test changes to a campaign. and if we define what a Google Ads experiment means, it basically allows you to test the performance of an alternative version of a campaign in order to improve the campaign's performance- a very complicated explanation, but I guess it's pretty straightforward.

let me start out by saying that Google Ads experiments is overkill for most of the changes you'll make to your campaigns. what do I mean by overkill? well, most changes that advertisers make in Google Ads are small tweaks- for example, changing the max CPC of a product or adding a new keyword to an ad group. these are small changes, and you don't really need to launch an experiment to understand something or to improve your performance. the consequences of these changes are usually pretty limited: if you change the max CPC and suddenly the actual cost per click is too high, you can simply lower it. that's it. you don't have to run a whole experiment about it. but the bigger your change, the more Google Ads experiments can shine, because it helps you reduce the risk by testing an alternative version of a campaign on a subset of traffic.
For example, you could send 50 percent of the traffic to a version using enhanced CPC and 50 percent to a version that uses target ROAS bidding. Actually, when I tried that recently, I couldn't - Google used to support changing the bidding strategy in an experiment, but it no longer does, which is frustrating. Still, the point stands: the changes you test should be big ones.

Google Ads offers three types of experiments, which you can see here: optimize text ads, video experiment, and custom experiment. With optimize text ads, you can test variations of your expanded or responsive search ads. Video experiments are fairly self-explanatory. The custom experiment is the one we use most of the time: while the first two are limited in scope, a custom experiment lets you make almost any change to your campaign.

For a custom experiment, you first pick the campaign type, and then you choose your base campaign - the original campaign that will be used as a template. Google Ads then creates a trial campaign that is identical to the original, with the exception of the changes you make. The sky is nearly the limit: targeting, keywords, match types, ads, bid adjustments - anything you can normally change (except, as mentioned, the bidding strategy, which used to be editable but isn't anymore). As you can see, our experiment has been running for only seven days, but as a rule of thumb you should let experiments run for at least two weeks. That part is very important.
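To make the two-week rule concrete, here is a rough, hypothetical sketch of how you might estimate the minimum runtime for one experiment arm. The budget and CPA figures are invented for illustration, and the 200-conversion target is the rule of thumb the transcript mentions - this is back-of-the-envelope math, not anything Google Ads computes for you:

```python
import math

def experiment_runtime_days(daily_budget: float,
                            cost_per_conversion: float,
                            target_conversions: int = 200,
                            traffic_share: float = 0.5,
                            min_days: int = 14) -> int:
    """Estimate how many days an experiment arm must run to collect
    `target_conversions`, assuming steady spend and a fixed CPA.
    `traffic_share` is the fraction of traffic routed to this arm."""
    daily_conversions = (daily_budget * traffic_share) / cost_per_conversion
    needed = math.ceil(target_conversions / daily_conversions)
    # Never stop an experiment before the two-week minimum.
    return max(needed, min_days)

# A hypothetical $500/day campaign at $25 CPA, split 50/50:
print(experiment_runtime_days(500, 25))  # → 20
```

If the CPA is low enough that the conversion target is hit in under two weeks, the two-week floor still applies, which matches the advice above.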
For example, as you can see, we don't have enough data on our conversions yet. That's because you need at least around 200 conversions to get statistically significant data, and not many businesses can afford that. If you run ads for attorneys in the United States, say, the cost per click is very high, so you get very few conversions per month - an experiment like that might have to run three months or more to collect 200 conversions. One way around this is to optimize for micro conversions: a macro conversion would be a sale, while a micro conversion might be someone leaving their email address.

Sometimes you can see where the trend is going anyway, but it isn't very scientific. With what we have right now - seven days of data - cost and the other metrics are statistically significant, but conversions are not. Still, we can see that conversions are up by 21 percent and cost per conversion has decreased by 65 percent. To be clear, the difference in percentages is the percentage change in performance between your experiment and the original campaign. If you're not familiar with statistics this might seem complicated, but it basically indicates whether the variation was real and not just luck; click the question marks in the interface and you'll get the gist of how it works.

As you can see, this is our first experiment with this campaign, and the data is inconclusive because we didn't reach 200 conversions. The second one has the same problem - even if we let them run for two weeks instead of one, we still wouldn't reach 200 conversions. Even so, you can see that this experiment was not as successful as the other one.
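To see why a seven-day sample with few conversions is inconclusive, here is an illustrative two-proportion z-test comparing conversion rates between the control and trial arms. This is my own sketch of the standard textbook test - the transcript doesn't show Google's internal math, and Google's significance calculation may differ:

```python
import math

def conversion_z_test(conv_a: int, clicks_a: int,
                      conv_b: int, clicks_b: int) -> float:
    """Two-proportion z-test: returns the two-sided p-value for the
    difference in conversion rate between two experiment arms."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Normal-approximation two-sided p-value via the complementary
    # error function (P(|Z| > |z|) for a standard normal Z).
    return math.erfc(abs(z) / math.sqrt(2))

# 40 vs 50 conversions on 1,000 clicks each looks like a 25% lift,
# but the sample is too small to call it significant at 0.05:
print(conversion_z_test(40, 1000, 50, 1000) < 0.05)  # → False
```

This is why a 21 percent lift on a handful of conversions can still be noise: the fewer conversions in each arm, the larger a lift has to be before the p-value drops below the usual 0.05 threshold.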
Actually, if you want to know the difference between the two experiments: the base campaign is one thing, and in the trial campaign we simply added a target CPA. Our base campaign had no target CPA and just used maximize conversions, while in the trial campaign we wanted to see how a target CPA of 70 dollars would affect performance. Eventually we implemented the target CPA for this campaign, but not for the other one, because its spend dropped by 85 percent and, obviously, it got fewer conversions. For this particular client it's important to have a lot of conversions, and the exact cost per conversion matters less as long as it stays below about 150 dollars, because they're a B2B SaaS startup with roughly a two-thousand-dollar LTV.

Finally, this is the split between the traffic: you can choose whether it is cookie-based or search-based. I'd recommend reading the Google articles about it and then start implementing experiments in your own campaigns. That's all I wanted to share with you today. Thank you very much for listening - I hope this helps, and if you need a consultation, contact us. We'd be glad to help. Have a great day. Bye.
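The decision logic described here - accept a higher CPA as long as it stays below a ceiling the client's LTV justifies - can be sketched roughly as follows. The $150 ceiling and $2,000 LTV come from the transcript; the LTV-to-CPA ratio check is my own illustrative framing, not a rule the video states:

```python
def cpa_is_acceptable(cost_per_acquisition: float,
                      ltv: float,
                      cpa_ceiling: float = 150.0,
                      min_ltv_ratio: float = 3.0) -> bool:
    """A simple guardrail: a CPA is acceptable when it is under the
    agreed ceiling AND the LTV-to-CPA ratio stays healthy.
    The 3:1 ratio default is a common heuristic, assumed here."""
    under_ceiling = cost_per_acquisition <= cpa_ceiling
    healthy_ratio = (ltv / cost_per_acquisition) >= min_ltv_ratio
    return under_ceiling and healthy_ratio

print(cpa_is_acceptable(70, 2000))   # the trial campaign's target CPA → True
print(cpa_is_acceptable(180, 2000))  # above the $150 ceiling → False
```

A guardrail like this is what makes a $70 target CPA an easy call for a client with a $2,000 LTV: even a large CPA swing in the experiment leaves plenty of margin.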

Google Ads Campaign Experiments

If you're anything like me, you're always looking to improve your PPC campaign performance - deciding whether changing the bid strategy would be a good idea, whether to adjust the ad scheduling, or whether to change bids and get more or less aggressive. If we're going to do that, there are a couple of ways to go about it. We can make the changes to our existing campaign and run a before-and-after analysis, trying to understand whether performance has improved since the change. But that approach has issues: performance can change week to week, we might be reaching a different group of people, there may be seasonality, or you may simply not trust before-and-after comparisons because of the confounds that come with them.

Instead, what I want to talk about today are Google Ads campaign experiments. We create a draft campaign - effectively a duplicate of our existing campaign - make all the changes in it that we want, and then run a split test between the control (the existing campaign) and the experiment (with all of our changes) to see which performs better head-to-head. If the experiment performs better, we can roll out the changes directly; if it doesn't, we can shut it down, start a new one, or leave the existing campaign in place and let it keep running as-is.

So let's jump in and set up a campaign experiment in Google Ads. In this account I have a campaign where I want to test something a little different: right now it's capped by budget, and I want to see whether I can get better performance by adjusting the ad scheduling on the campaign. You can set up experiments in a couple of different ways.
You can start from the All campaigns section and go down to Drafts & experiments. There you'll see any existing drafts in your campaigns - campaign drafts, experiments, and ad variations are listed at the top. Click the blue plus button to create a new draft, then click the pencil and choose the campaign you want to base it on. Alternatively, you can create a draft from within the campaign itself. Say you're already in the campaign you want to change, looking at performance, and you think adjusting the day-parting might help with the limited budget: from inside the campaign, go down to Drafts & experiments and click the blue button. You won't be asked to choose a campaign, because you're already in it. So you can create a campaign experiment either from the All campaigns tab or from within the campaign itself.

Let's create our draft campaign. I'll name the draft "day parting adjustment" so I know what I'm looking at and what I'm planning to test. Make the name as specific as you need, so that when you come back later you know exactly what you were testing; add a description if you like, then click Save. You'll land in what looks like a regular campaign editor. Everything looks the same - the only difference is the breadcrumb at the top, which shows All campaigns, then Drafts, then the name of your draft, instead of the normal campaign information. It will also constantly show the draft status and the original campaign the draft was based on.
Now that we're in here, we want to create the changes we want to test in the campaign - that's why the draft gives you all the same options as the regular campaign editor. I just want to test the ad schedule and see what happens if I turn it off on the weekend. So I go to the Ad schedule section and adjust it: right now it's set to all days, so I change it to Monday through Friday so the ads don't run on weekends, and hit Save. Easy enough - the day-parting is set up and the ad schedule is squared away.

Now I want to turn this into a campaign experiment; we're only part of the way done. What we've done so far is stage the test variable. Next, we set it live. I hop back into the Drafts & experiments tab - you'll notice the drafts list is gone, because I'm currently inside a draft - and click to create a new experiment. Choose a name that makes sense for what you're testing, but also keep in mind reporting: if you pull any reports filtered on campaign name - whether the name needs to contain "search" or "US" or anything else - you'll want to duplicate the original campaign name in the experiment's name and append an adjustment to the end of it. So I'm going to copy the original campaign name and add "day parting adjustment" at the end.
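The naming trick - keep the original campaign name as a prefix and append a suffix - pays off when filtering reports. A minimal sketch with hypothetical report rows (the campaign names here are invented for illustration):

```python
# Hypothetical rows from a campaign-level report export.
rows = [
    {"campaign": "US Search - Brand", "conversions": 120},
    {"campaign": "US Search - Brand day parting adjustment", "conversions": 95},
    {"campaign": "US Display - Remarketing", "conversions": 40},
]

# A prefix match pulls in both the control and its experiment,
# because the experiment name starts with the original campaign name.
brand = [r for r in rows if r["campaign"].startswith("US Search - Brand")]
print([r["campaign"] for r in brand])
```

Had the experiment been named something unrelated, a report filtered on the original campaign name would silently drop the experiment's share of the traffic.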
Now, if I pull a report, I can search for the campaign name itself and it will include both the control and the experiment, and the "day parting adjustment" suffix will be included or not depending on the other adjustments I make to my report - just something to keep in mind. I can then choose a start date, and an end date if I want one, or leave it as none to run open-ended.

Next you choose the experiment split: how much volume goes to the test campaign versus the control. If you're very sensitive to changes in your account, I suggest something closer to ten percent going to the experiment, or maybe 25 percent. But if you're comfortable with a 50/50 split, that's what will give you results fastest, because it pushes the most data through both variants as quickly as possible.

The last piece is choosing between a search-based and a cookie-based experiment split. With a search-based split, someone could see the control version of the campaign on one search and then, if they do a similar search later and find their way into your campaigns - even on the same device, even the same day - land in either the control or the experiment. In my experience that's just not the best way to do it; I suggest you pretty much always use the cookie-based split to keep each user in one arm of the test. Then hit Save, and you'll see that the experiment is being created and made ready to go.
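The difference between the two split types can be sketched like this - hypothetical code, not how Google actually implements it. A cookie-based split assigns each user deterministically (the same cookie always lands in the same arm), while a search-based split effectively re-randomizes on every search:

```python
import hashlib
import random

def cookie_based_arm(cookie_id: str, experiment_share: float = 0.5) -> str:
    """Deterministic: hash the user's cookie ID into a bucket, so the
    same user always sees the same arm across repeated searches."""
    bucket = int(hashlib.sha256(cookie_id.encode()).hexdigest(), 16) % 10_000
    return "experiment" if bucket < experiment_share * 10_000 else "control"

def search_based_arm(experiment_share: float = 0.5) -> str:
    """Randomized per search: the same user can bounce between arms."""
    return "experiment" if random.random() < experiment_share else "control"

# The cookie-based assignment is stable across repeated searches:
print(cookie_based_arm("cookie-123") == cookie_based_arm("cookie-123"))  # → True
```

The stability of the cookie-based assignment is exactly why it gives cleaner results: each user's whole journey is attributed to one arm, instead of being smeared across both.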
Once the campaign experiment has been created - and since the start date is the day I created it - it will go live and start splitting the campaign data between the two line items, assigning people to either the control or the experiment based on their cookie. If we go back to the All campaigns tab, we'll now see both the control campaign and the experiment campaign as two separate line items next to each other. This is how all the data will come through, and this is what makes it so easy to compare the performance of the two side by side.