S12E01 Testing 1, 2, 3 with Joel Meador and Charles Suggs
Owen: Hey everyone. I'm Owen Bickford, senior software developer at SmartLogic.
Sundi: And I'm Sundi Myint, engineering manager at Cars Commerce.
Owen: We are your hosts for today's episode. For episode one of season 12, we're joined by Joel Meador, a staff engineer at SmartLogic, and Charles Suggs, a developer at SmartLogic. In this episode, we're discussing software testing — everything from tools and underlying philosophies to personal experiences, trauma, and creative problem-solving strategies that keep our projects on track. No one's traumatized, right? So welcome, Charles and Joel. And hey, Sundi, it's been a few weeks.
Joel: Hello.
Charles: Hello.
Owen: Should we also say hello to YouTube?
Sundi: Yes! This is our first foray into the video world. So for audio listeners who aren't seeing our hand motions: we're on YouTube!
Owen: We'll have a link to the video in the podcast description. If you want to see the actual video of our episode, this is the inaugural video podcast for Elixir Wizards. We've done a little bit of video before for live streams, but this season we're kicking off our podcast as video. You'll still be able to get your Elixir Wizards on your favorite podcast player, but if you want the extra, go to YouTube and find our video podcast.
Joel: No shadow puppets on the audio version.
Owen: Wait till next week. Now I'm going to add shadow puppets to my list.
Joel: Put it on the agenda.
Owen: Yeah, so this season we're actually talking to internal employees at SmartLogic to cover topics about the way we work, problems we've solved, challenges we've run into, that type of thing. And we're happy to have Charles and Joel on to get us kicked off for the season. Let's go with Charles first. You're the newest member of the SmartLogic team on the call.
Charles: I'm Charles. I've been working as a software engineer for over 14 years now. I started back in the dark world of PHP, from a Visual QuickStart guide, while I was recovering from surgery, and eventually started a worker cooperative with a couple of others over a decade ago now. That wrapped up a couple of years ago, and now I'm happy here at SmartLogic writing Elixir.
Owen: So you're recovering from surgery and you're like, you know what? I need more pain. Or was it the pain meds? What were you doing reading PHP?
Charles: Prior to that, I had hiked the Appalachian Trail and graduated college with a degree in journalism.
Owen: Okay.
Charles: Yeah.
Owen: And Joel, where do you come from? And who are you?
Joel: I come from Kentucky, but I'm going to tie this all back together. My name is Joel Meador. My pronouns are he/him. I've been working professionally since about 2002. My first job — actually my first job in college — was testing software that did 3D imaging, which in 2002 was pretty hard, and the computers were not very fast. So we did a lot of testing of that software to make sure the 3D points we were getting out of the scanning software were right. My second and third jobs were also software testing, for the most part. My first job after college was writing an automated — not usability, but Section 508, accessibility — testing framework that I wrote and maintained for a government website. So that was my first job.
Joel: And then I've been testing in all kinds of different ways since then: hardware, software, people, et cetera.
Owen: And you forgot the most important thing that I always tell people when I tell them about you, Joel: you're a professional tree knowledge-ist, because that's a word.
Joel: I just have more tree knowledge than maybe is useful, but I do have a lot.
Sundi: I'm going to take us right into our group philosophy behind software testing. Charles, out of everyone on the call, I think I've talked to you the least — and that was just the last five minutes. Can you tell me a little bit about your philosophy behind testing and software development, and how you came to that philosophy?
Charles: My first introduction to testing for software was by way of Bob Martin's Clean Code videos. I was working somewhere where we all had to watch and discuss them. He takes a very TDD — test-driven development — approach. He calls it red, green, refactor: you write the failing test, then you write the code that makes the test pass. So you go from red to green, and then you can safely refactor your code, check that the test still passes, feel good about it, and move on. That was kind of the basis for me, and I try to do that when possible, but I find that oftentimes I have to work through a bit more of what's going to happen before I can write the test. It can also be nice to fold it into pair programming, where one person writes the test and the other person writes the software to pass the test. You can actually come up with better tests that way, because you can always work around a test — make it pass without actually proving the code does what it needs to do.
Sundi: You're throwing me back to college, because I think I had automatic red-green tests — my homework had to pass the tests the professor set up. I will not tell you the percentage of times I actually succeeded, but that's where my brain went. Thanks for sending me back to college, Charles.
Charles: Yeah. To wrap that up: for me, it comes back to this — if I feel like I have quality tests that cover enough of the code and the functionality, there's a good chance we're going to be able to ship something that doesn't cause a problem.
Owen: Right on. And Joel, you talked about accessibility testing; you're a staff engineer, and I know you've written some Ruby and other languages as well. What are some of the frameworks or patterns you've seen come and go over the years?
Joel: It's been remarkably stable, because in the late '90s or early 2000s we got the xUnit style of testing, right? JUnit was one of the earlier ones, which is what I started with. And we see that a lot — the stuff we do in Elixir is ExUnit. Erlang has its own thing, which I don't remember the name of. In Ruby, Test::Unit used to be the thing, but most of the Ruby community changed everything over to Minitest some years ago. PHP has PHPUnit. Everyone does this thing, so it's kind of a standardized framework that came out of the Agile Manifesto era. I don't really remember when that was, but it was pretty early in my career, if not right before my career started.
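[Editor's note: a minimal ExUnit sketch of the red-green-refactor loop Charles describes, using a hypothetical Cart module. The test is written first, so it fails (red) until the smallest implementation turns it green, leaving room to refactor with the test as a safety net.]

```elixir
# test/cart_test.exs — written first; fails until Cart.total/1 exists.
defmodule CartTest do
  use ExUnit.Case, async: true

  test "total/1 sums line item prices" do
    items = [%{price: 500}, %{price: 250}]
    assert Cart.total(items) == 750
  end
end

# lib/cart.ex — the smallest implementation that makes the test pass.
defmodule Cart do
  def total(items), do: Enum.reduce(items, 0, &(&1.price + &2))
end
```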
Owen: So, I'm a newer programmer. I've been doing this professionally for about four or five years now, but I've been writing code — though not testing code — for longer than that. I think a recent improvement in the industry is that these testing tools are more and more built into languages, newer languages at least. Go, I think, was maybe one of the first to introduce built-in testing tooling, and then Elixir rolled ExUnit into the standard distribution you get when you install Elixir.
Joel: I think I'd fight you on that one, actually. This stuff goes way back. Smalltalk had built-in testing. I don't remember much of the stuff before Smalltalk — Smalltalk is really a linchpin technology, a sea change in how a lot of people thought about things. And it was built on other stuff too; I'm not going to say it was wholly unique, but it really codified and brought together a lot of ideas in the industry. I don't think COBOL had asserts or whatever — maybe it did, but I'm not smart enough to talk about COBOL, really. There's a lot of stuff that preceded all of these things; it's all built together, and it all kind of looks the same. Ruby predates Go, and Ruby had asserts and a testing framework built in — it's all in the standard library. So this stuff predates mass adoption. We start seeing mass adoption of testing frameworks and testing practices in the late aughts, I'd say — that's when it achieved mindshare, to use a term. That's what I saw. Ruby on Rails and the stuff that preceded it were pushing this testing ideology, these testing frameworks, as a first-class thing. You see Uncle Bob doing that stuff around the same time. A lot of that comes out of the Gang of Four design patterns work — it's tied in pretty strongly to that movement. There were all these "what if our software didn't suck" movements, and testing was a leg of that. Anyway, that's the old man screaming at clouds.
Sundi: I'm looking up the history of software testing, and it says the first dedicated software testing team was formed in the late 1950s at IBM. Interesting. You know I'm a huge fan of "history ofs" — history of emojis, and so on. A history of testing could be a really cool thing to hear more about.
Joel: You need to get an expert on the Apollo program's history. Let's figure out how they did testing, because I know they did.
Sundi: I hope they did.
Owen: That's a good transition point. So why do we write tests? Is that too broad a question? If we can just write some code and see that it works in the browser while we're writing the thing, why do we need tests?
Charles: I think there are multiple layers to it. I've mostly spoken about testing your code, and there are various kinds of tests there, but there's also testing with end users — usability testing. So I think the reasons we test are not just so the software doesn't suck, but so that the software continues to work as expected, so we can move more quickly, and also so that what we're building is what the users are actually going to use and want — that it solves a problem or makes something better, as opposed to worse.
Owen: I think it also helps us know that things are working as expected. So for user testing, it's making sure the users are satisfied.
Owen: If we're launching something into space, maybe we write some automated tests so that we know all the subparts of the system are working correctly, given some known inputs.
Sundi: There's also something that comes to mind. Since I manage engineers at multiple levels now, one thing I've noticed is that more experienced engineers might or might not write tests first — sometimes they do, sometimes they don't, that's debatable in the industry. But what I do notice is that even if someone senior can just sling code and get an idea out, a lot of junior and mid-level engineers will think, "I'm not exactly sure what to do here, so I need to figure out what it looks like," and they'll write their test first as a way to help them get to the end solution of whatever they're writing. I've always found that an interesting technique for leveling themselves up and just getting to the goal of whatever function it is at the time.
Joel: I'm glad you didn't make me the person defending TDD, but yeah — TDD. You write the test first so that you know what you're trying to achieve, right? And I think that's a really important aspect of testing. If you walk into a manual or automated test, or any kind of test, without a goal in mind, then what are you doing? You're potentially not going to come up with a test that does anything. Test-first is nice sometimes. If we have time, I might talk about an entire application that we wrote test-first — it was very interesting, but it's a story.
Owen: Quick poll: who writes tests first, who writes tests last, and who changes every other day?
Charles: I do both, depending on the situation. And after Joel answers, I have something to add to what he was saying a moment ago.
Joel: I don't write code, so I don't know. Usually test first, though.
Owen: You commission tests.
Joel: I don't know what order they were written in by the time I get to them — I'm just looking at pull requests. Unless I'm looking at commits.
Sundi: Joel, I was going to make the same joke! You stole it!
Joel: You're welcome.
Charles: So I think this hits on another benefit of testing, and that is isolation. When we build more complex systems, it can help to isolate what you're working on as you're building something new. Maybe you're isolating from a third-party system, an API. Maybe you're isolating from a database, and so you mock up the responses. Let's use an API as an example: hit the API with some method and get a bunch of different responses back. You know what they look like — maybe the docs are really good, they explain it for you, and they're up to date. Now you can use that to write tests against the API and get a lot of work done without necessarily having to hit the API as you're testing. Maybe there's a rate limit you've got to worry about. There are all kinds of reasons why we might also want to use tests to develop in isolation and more quickly.
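[Editor's note: a minimal sketch of the isolation Charles describes, using the Mox library. The WeatherAPI behaviour, module names, and response shape are hypothetical stand-ins, not the project's actual API.]

```elixir
# lib/weather_api.ex — the contract the real client and the test mock both implement.
defmodule WeatherAPI do
  @callback forecast(city :: String.t()) :: {:ok, map()} | {:error, term()}
end

# lib/report.ex — looks the client module up from config so tests can swap in a mock.
defmodule Report do
  def summary(city) do
    api = Application.get_env(:my_app, :weather_api, WeatherAPI.HTTP)

    with {:ok, %{"high" => high}} <- api.forecast(city) do
      {:ok, "High of #{high} in #{city}"}
    end
  end
end

# In test/test_helper.exs:
#   Mox.defmock(WeatherAPIMock, for: WeatherAPI)
#   Application.put_env(:my_app, :weather_api, WeatherAPIMock)

# test/report_test.exs — no network calls, no rate limits, just canned responses.
defmodule ReportTest do
  use ExUnit.Case, async: true
  import Mox

  setup :verify_on_exit!

  test "builds a summary from the forecast" do
    expect(WeatherAPIMock, :forecast, fn "Baltimore" -> {:ok, %{"high" => 72}} end)
    assert Report.summary("Baltimore") == {:ok, "High of 72 in Baltimore"}
  end
end
```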
Owen: One challenge I face is how robust to make them — how many permutations to cover. I think most of us on this call have been working on a very sprawling application with a lot of complexities; let's just leave it at that. For any given feature, or LiveView if we're writing a LiveView test, there's a range from zero tests, to just making sure it renders, to making sure every possible type of user can do every possible type of interaction on that LiveView. What are your thoughts on finding the right balance in that spectrum — how verbose should the tests be?
Joel: I think we have a lot of other concerns there. Particularly when you're doing consulting work, you've got budgetary concerns around making perfect tests — any kind of tests, really. A lot of times it should be factored in, but you never know what you're going to run into, and it gets exponentially harder when you have lots of user types, which we do in the app we're talking about. If you're on a product team working at a slower pace, you sometimes have more luxury around what you can do than you do in a drop-everything consulting kind of situation. So I think that's just the reality of the work we're in right now. At the end of the day, you want your test to be something you can read and tell what the hell it's trying to do. You can make the most arcane test that passes, that no one else could ever read — basically writing Perl, bad Perl, not good Perl; there is good Perl out there. But you can also write something that's beautiful and impossible to maintain, that's brittle and breaks every five minutes. There's a lot of nuance to that question, and I don't have a perfect answer for you. There are just a lot of give-and-takes around where you stop and where you start tests.
Sundi: The guideline I tend to give people on my team — and I do work on a product team now — is to do the happy path, the not-happy path, and at least one edge case. That's probably not enough, but it's at least a good starting point. Usually once somebody has those first three, they have a really good understanding of what they've covered, and then their brain starts going, "Oh, well, if I did this edge case, that edge case is really obvious." On average they might have three to five tests at the end of it, and we can expand on that if we want. Usually that's a good spot to start.
Charles: This project we're working on is probably the most front-end-heavy set of tests that I've written. In the past, the concentration has largely been on unit testing. I think eight years ago or so, a number of the people I was around had spent a bit of time working with Protractor and some other front-end testing tools — this was in the context of some JavaScript and PHP applications — and we found the tests to be so brittle and time-intensive to maintain that they just weren't worth the benefit. So we threw our efforts more into unit testing, and integration testing where it made sense.
Owen: Something I've been learning throughout this sprawling app is that, yeah, we're writing a lot of what I guess we'd call integration tests. They're giving us the code coverage, because we're testing a LiveView, which runs a bunch of module functions in the background, so those module functions get counted as covered. That helps us with our coverage — though coverage is not the gold standard for how good your tests are. You can have a hundred percent coverage and still have a lot of bugs, right? The other thing I'm thinking about is that there's not really a line; deciding which parts to test is, I think, the fascinating problem we solve. So we have all these LiveViews, and we're rendering all these components, and I think the lesson I'm learning is that what's rendered inside the components should be covered in a component test, not in the LiveView test. That way our LiveView tests can probably be a little simpler: they assert that certain elements are rendered if certain conditions are met, and then the component test is where we test values and rows and columns, if we're talking about tables.
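[Editor's note: a rough sketch of the split Owen describes, using Phoenix.LiveViewTest. The route, component, assigns, and test case modules are hypothetical, not the actual project code.]

```elixir
# test/my_app_web/live/people_live_test.exs — the LiveView test stays shallow:
# it checks that the page mounts and the table component shows up at all.
defmodule MyAppWeb.PeopleLiveTest do
  use MyAppWeb.ConnCase, async: true
  import Phoenix.LiveViewTest

  test "renders the people table", %{conn: conn} do
    {:ok, _view, html} = live(conn, ~p"/people")
    assert html =~ ~s(id="people-table")
  end
end

# test/my_app_web/components/people_table_test.exs — the component test owns
# the details: row contents, columns, formatting.
defmodule MyAppWeb.PeopleTableTest do
  use ExUnit.Case, async: true
  import Phoenix.LiveViewTest

  test "renders one row per person with their name" do
    people = [%{id: 1, name: "Ada"}, %{id: 2, name: "Grace"}]

    html = render_component(&MyAppWeb.PeopleTable.table/1, people: people)

    assert html =~ "Ada"
    assert html =~ "Grace"
  end
end
```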
Charles: And focus our efforts on what's critical. What's really important to show here with this content? Is it the people? Is it the data about the people? Is it the title? Is it just making sure there are no clickable elements because it's a print view? I think it's a good skill to be able to look at a set of tasks in a larger thing, see common threads, and see how you can solve them more efficiently. But more often than not, people see that through rose-colored glasses at first, and by the time you get in there and into the details, it can be just as hard to maintain one thing for all cases as to maintain a separate thing for each case, depending on the details.
Sundi: The one nice thing about incorporating testing into the Agile process is that it's just part of your ticket. An acceptance criteria item on a ticket might be to include testing. I guess I'm not as familiar with other forms of software engineering process — I think Kanban is the other traditional one. I guess that still has tickets, so it could still have acceptance criteria; I'm not sure. In terms of testing, I feel like it does help highlight validation steps when you have multiple environments — when you check things in, and also when it goes to production, checking it against the various production concerns.
Owen: If you have feature flags or A/B testing going on, how do you even test for those things? You're able to do it all in one place — it's kind of interesting, the whole world of it. I'm thinking about the Agile part of the question here. The alternative to Agile that I hear most commonly is Waterfall.
Sundi: Mm-hmm.
Owen: I don't know what the other options would be, but it seems like Waterfall would dictate that you write your tests as part of the process, because you're always stair-stepping into the next phase. Whereas with Agile, I think it's a little easier to neglect or forget to factor in testing. If your team doesn't have that habit built in — we think it's going to take X hours to write the code, to design things in Figma, whatever — it's easy to forget, "Oh yeah, we actually need to spend one to four hours testing." It might double the amount of effort, because you've got to consider so many different factors in the code, and you might find bugs once you've written those tests. So are there other ways that Agile versus Waterfall or some other methodology impacts how you think about or budget for testing? Joel?
Joel: Yeah, because if you're talking about old-school waterfall where you've got an entire testing organization around something, they're a very real line item in your budget. And having them sit on their hands for a month or two months or however long, while you spin up the rest of your processes, was extremely expensive — if you were keeping them around. Now, I understand that companies don't actually operate that way anymore; everyone runs super lean most of the time, so this isn't a realistic thing most of the time. But those testing organizations, those testing people, are a pretty huge line item, particularly if you have an experienced team. And we know that because we have QA folks internally now, or have for at least a year. When they don't have something to do, that's pretty bad: they're bored, we're unhappy, we want them to have something to do, we want to get feedback, et cetera. I think in the Agile world, that's why people push TDD: you're not going to forget testing if it's literally the first thing you have to do to produce working software, right? It's just expected to be a thing you do first. I think that's one of the reasons I push it as a philosophy. I don't know if I answered your question — I'm bad at remembering the questions, man.
Owen: This is kind of a nebulous conversation we're having. We're touching on automated versus manual versus integration testing, Agile versus Waterfall. It's a little all over the place, for sure.
Joel: Maybe I'll give you a little hint of an old story about some testing I was doing. I used to work at another consulting company a long time ago, and one of the products we managed was physical switch hardware. If you're in a data center, you've got these big racks of Cat 5, Cat 6, whatever they were using at the time — plugs that you plug things into. My company managed and wrote smart switching software: a Windows program plus a big rack of Cat 5 plugs. If you plug something into a port, the software would say, cool, you plugged something in; then you plug in the other end, and cool, you plugged it in over here — these things are connected now. As you might imagine, that's pretty complicated. We had a guy, who I will not name, who wrote firmware for the smart switches, and let's say he was a little loosey-goosey sometimes about some stuff. We supported international markets — Italy, France, all over the United States, some other parts of Europe I don't remember anymore. So we had firmware in all these different languages; they all had slightly different software; it was a mess; the hardware had all kinds of revisions. The fellow writing the firmware sometimes made mistakes, but we would get the firmware the day it needed to go out to France. Right. So it's like, okay, run through your whole test plan today. Impossible — the number of steps to fully test this thing was in the hundreds. We had three levels of testing: does it pass the smell test; can it do the basic things; and then fully test the software and hardware. And we had to test the firmware against multiple hardware revisions of the plates that sat in front of these things. So it's a whole big mess, and you can imagine what happens when sloppy, untested firmware comes out. No one had tested it — literally, the dude had just written it, compiled it, and sent it to us in an email. That's basically what we were getting. That's very expensive. We're talking two full-time senior engineers, maybe three, working on the software side to make sure the firmware works, and three full-time QA people on top of that.
Joel: So this person never tested their stuff before sending it over — ever. Extremely smart guy, I loved him, he was fun to be around, but this software often didn't work. Sometimes it did; it was field-tested and it worked most of the time. But sometimes, when there were changes, it just didn't work, and then we're like, okay, we've just wasted two days of six people's lives doing this thing, it doesn't work, and it bricked this hardware, so we have to go reset that and deal with all this other stuff. That's the downside of not testing up front. If you're doing waterfall development and you're not doing that developer testing, that basic QA, or you don't even have a framework for it, you get into real problems, because the expenses down the line are extreme. And then you can't ship that software to your customers, because you don't have a dependable delivery timeframe. There are all these things that happen. Anyway, my real-world experience with that kind of waterfall stuff was pretty uncomfortable a lot of the time.
Sundi: Charles, do you have any interesting, not-doomsday stories — I wouldn't call Joel's story a doomsday story — but any interesting "oh, we didn't test it, oh crap" kind of moments in your career?
Charles: There are probably some stories that I've shoved out of my memory, but I'm trying to think of something where we were like, oh, we really should have tested that first. I know I've broken production in the past when we weren't using tests and we were under a deadline and something broke — and, you know, the power was out, one of those kinds of things. But I don't remember the specifics anymore.
Owen: I think it'd be good to let our listeners off the hook just a little bit. If you're listening to us talk about testing and thinking, "Well, I haven't been writing good tests" — whatever your anxieties are about testing — it's okay. Testing is easy when you're on your new Phoenix project and you've got two or three controllers or LiveViews or whatever. But once you're working in a production app with a bunch of users and a team of people and all the external feedback that comes with that, it just gets really hard.
Sundi: The tests get old, they get flaky, they get stale, they build up in your CI pipeline.
Owen: And that's just the code tests. There are also complexities around the environment the tests are running in. I watched a talk from, I think, Code BEAM — I'll drop a link in the show notes — from Peter Hastie at Bleacher Report, talking about how they had done all these migrations and changes to their database. Everything worked well in development, CI passed, everything was beautiful in staging, everything was working exactly as expected — and then they deployed to production, and people couldn't sign in for ten minutes, because oops, we forgot about replication. Your production environment is different from all those other environments because it has replication, and that's just something you wouldn't really think of in your tests, right? You don't think of replication in your tests. So it was one of those gotchas, and I hate it, because it happens to the smartest engineers too. There are so many factors you have to consider.
Charles: That's when it's nice to have a staging environment that's just like production, with the same replication going on and everything, so you can maybe catch some of that stuff if your tests weren't written for it.
Owen: The only downside is that your technical deployment cost is then double, right?
Charles: Not necessarily — maybe you don't have as many nodes you're replicating to — but yeah. So, I remembered, very briefly: there was a time when someone connected to the production database while they were running tests. That was not a fun night, but I managed to sort through it and write a script that would basically go and clean up the production database, because with the timing of the backups, restoring from a backup was going to be messier.
Sundi: I've seen a similar situation, and to avoid all details I'll just say they purchased a certain dollar amount of things in production instead of in the test environment, because they ran a script to load test something. You said that and my heart sank into the ground, and I was like, why did it do that? Hold on. And then I backtracked and I remembered.
Charles: It was not I who connected to the database, let's be clear.
Sundi: And I definitely never worked with a person like this, and this never happened at a place I ever worked.
Owen: Right, it wasn't me.
Charles: I think there are limits to testing, right? Things come up that—
Joel: Yeah, that you can't adequately test. I'll give you a little preview of something that happened on a team I was working on: we ran out of 32-bit integers. You're probably not going to test for that — you're just not expecting it. Dealing with that, and dealing with the eventual consequences of converting everything to 64-bit, was a huge testing and lifetime journey. I think we ended up with 25 shards in production before we got the 64-bit conversion done. It was a whole nightmare. And that's not the only project that's happened on, either — I've got multiple stories of that sort, on high-growth products. There's stuff like that you can't test for, or that's really hard to test. It's easier now than it used to be, but it's really hard to test what your environment automation is going to do without doing it, and sometimes that can be extremely dangerous and extremely expensive if you fuck up — spoken as someone who has done that thing poorly before. Sometimes you just don't do it right, and then bad things happen, and you have to fix them. So there's a lot of stuff that's really hard to test and maybe not even reasonable to test, particularly given that every software context is usually pretty different, even if you've seen similar patterns before. Even this app that the three of us — everyone other than Sundi — have been working on has enough different stuff from other things I've worked on that it has its own challenges I haven't seen before. I've seen things like it, but not this particular one. You're going to get something new with every app. And learning to get value out of the tests, learning how to do that generally, is really great when you're working on something new you haven't seen before — and you get something you can share with people when you write it. Even if it's just "here are the manual test steps to do this thing," having that as an ability you can bust out is super valuable.
Joel: And coming from me, a person who did manual QA for about five years in addition to doing development at the same time: having those steps really helps a lot with being able to write clear tests, because you know what a clear test plan should look like. If you have to share it with other people and talk about it, a lot of times your tests will read like a clear test plan — if you've written them in a way that can be helpful for other people. I don't get that a hundred percent of the time, I don't even keep it at fifty percent, but it's a skill I've tried to carry through my whole career.
Charles: The human element seems particularly hard to test for.
Owen: Yeah, that's the wildcard in any project, even a one-person project, right? We learn, we change every single day, which is why all of us have looked at code we wrote a month or six months ago and said, "What was I thinking?"
Joel: Not me. That's never happened.
Owen: Except for Joel.
Sundi: I think one of those real-world usage examples — it's not exactly hard to test for, but I think of this a lot — is load testing. Charles, you were talking about having an exact replica of prod in staging. While you could do that, and you could also simulate the load you see in prod, it can be a challenge to actually do that. So sometimes we'll see stuff out in the wild and think, why didn't they test that? How could they not test that? But the load was this tiny amount in staging, and in prod it was this much. So of course these were boundaries of our limitations that we just didn't know about — and then you adjust your load testing, and you move forward. That's an easy kind of gotcha in the world of real-world usage.
Joel: We won't name the client, but Owen, do you want to talk a little bit about the client we had last year, where we discovered how slow the integrity checks on the database were — we discovered it in production.
Owen: I have scrubbed that from my memory.
Joel: Great.
Owen: No — one thing. Okay, so I was debating whether to think through this on the podcast, but one pattern I saw on a project, which I've been kicking around, is: should we consider, or even try, this? This was a different client project — a large umbrella application with a large, very complex database. One thing they had done is use ecto.dump, so you would dump not only the schema of the database; they would also, I think, do a Postgres dump of all the data, or at least a subset of it. So you'd have something kind of like production on your dev machine. I see both sides of this. It's great because it gives you a much more real-world view of the application you're interfacing with, which can be useful both when you're developing and when you're testing. The downside is that it could be gigabytes, so maintaining that process and doing snapshots is expensive, both in terms of engineering and the time it takes to get those artifacts onto everyone's machine. But the project we're on now has, I think, sufficient complexity that I'm wondering if factories are really getting us everything we need. So I'm thinking about alternatives for testing all the permutations of data that we have, and whether a semi-production clone, with some scrubbing, would make sense in the future for this type of app. Is that something you've seen work well in other environments, or is that something that always blows up?
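[Editor's note: for context, mix ecto.dump writes only the schema (priv/repo/structure.sql by default); the data copy Owen describes would be a separate Postgres dump. Below is a hypothetical mix.exs alias sketching how the two might be stitched together — the alias name, database name, excluded table, and scrubbing approach are made up for illustration.]

```elixir
# mix.exs — hypothetical alias: dump the schema with Ecto, then pull a
# data-only snapshot with pg_dump for seeding dev/test machines.
defp aliases do
  [
    "db.snapshot": [
      "ecto.dump",
      "cmd pg_dump --data-only --exclude-table=users_tokens " <>
        "--file=priv/repo/snapshot.sql my_app_prod_replica"
    ]
  ]
end
```

Run as `mix db.snapshot`; a real setup would also scrub personally identifiable data before the snapshot ever leaves production.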
Joel: We have that on another project for the same client we're working with, so it works pretty well — in development, at least. The scrubbing process is a bit of a thing. But yeah, it's really valuable to see what kind of weird stuff has been entered by people, whether just for testing purposes or as part of their normal work. It's really hard to predict how people will use something, and it's really hard to replicate ten years of use of an application, in this case. I could write factories all day, but that doesn't mean I'm going to get anything similar to the data loads we'd see in production.
Charles: At my former company we had a semi-complicated Vue.js application where we would use the production database for testing, both in staging and in local development. But we didn't set up any kind of automated process to grab snapshots on a regular basis; it was just periodically, when it felt like a good time to update those databases. We had some shell scripts that would handle it, but it was initiated manually. That worked pretty well, and I echo what Joel is saying about the weird things users put in production.
Owen: It wouldn't quite give us that level yet. Although it's complex and sprawling to a degree, it also hasn't been out in the wild for very long, so there won't be a rich set of production data. I'm thinking this is something we might consider six or twelve months from now. Yeah, we're talking about testing, so why not. As we're wrapping up here, I'm curious if you've seen tools in other languages that we're kind of missing from Elixir.
Joel: I think there's a lot of stuff out there in the testing realm that doesn't really exist in Elixir. Java — and probably Python these days too — has a really rich history of being used for testing things. So you've got all these load testing tools written in Java, all these penetration testing tools written in Java. You have that stuff in C and C++ too, a lot of the time, at least historically. I'm trying to think of some of the names and completely failing to remember any of them. But I used to do load testing as part of my job — several times I've done that as a job — and I've seen stuff in Elixir, but I haven't seen it work very well. Like, okay, I need my little fleet of laptops to go hit this website a million times in the next minute. How do I do that? I could write a script in Elixir, but there are tools that orchestrate this stuff.
Owen: There's a classic Phoenix article from Gary Rennie about hitting a Phoenix server with two million concurrent WebSocket connections. Is that kind of the same realm?
Joel: A similar realm, but it's not quite the same thing. Real-world payloads are really different from "I opened a WebSocket and held it" — those are really different load types. Processing data is very expensive, right?
Owen: That was almost ten years ago. Wow.
Joel: Yeah, it's been a long time.
Owen: We'll drop a link to that as well. I think I've seen a couple of revisits where people come back to this type of test and do another take on it.
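[Editor's note: a naive sketch of the ad-hoc script Joel alludes to — fire a batch of concurrent GET requests and count successes, using only OTP's :httpc. A real load test would use a dedicated tool, distribute the load across machines, and measure latency distributions rather than a success count; the module name and numbers here are illustrative.]

```elixir
defmodule NaiveLoad do
  @moduledoc "Very rough concurrency sketch; not a substitute for a real load-testing tool."

  def run(url, requests \\ 1_000, concurrency \\ 100) do
    # :httpc ships with OTP; start its dependencies explicitly for a one-off script.
    :inets.start()
    :ssl.start()

    target = String.to_charlist(url)

    1..requests
    |> Task.async_stream(
      fn _ -> :httpc.request(:get, {target, []}, [], []) end,
      max_concurrency: concurrency,
      timeout: 30_000
    )
    |> Enum.count(fn
      {:ok, {:ok, {{_http, status, _reason}, _headers, _body}}} when status in 200..299 -> true
      _other -> false
    end)
  end
end

# Example: NaiveLoad.run("https://staging.example.com/health", 500, 50)
```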
Owen: I vaguely remember someone doing a test — maybe it was this one — where they had all these connections and also sent out a Wikipedia article. It was a megabyte or a few megabytes, and maybe this was early-days LiveView, and they were able to see it propagate to all the connections over PubSub within a second or so. So we've definitely got some capabilities that are really special in Elixir; I think the tooling for testing those capabilities is a challenge, though.
Sundi: We touched on it a bit already, but just for Charles and Joel: what would you suggest to somebody who's starting out their career in software engineering, or maybe has gotten through a significant portion of their career without testing — I've seen that before. How would you suggest they get into the mindset of doing more testing? Joel, why don't we start with you?
Joel: That's a good question. For me, it's all about figuring out what the end result is that I'm trying to achieve. From there I can usually find a starting place and what I think the middle should be. In classic testing terms, that's assertion, expectation, setup — those are the pieces, in backward order. I do that when I'm thinking manually, and I do that when I'm thinking about automation. As long as you're starting with those three steps, you know: here's what I need to do to set this up, this is what I'm expecting to get, and this is how I get there. It's a little story you can tell every time you want to do anything. If you want to click a button and see what happens, that's a story. If you want to create a whole octopus of data and make sure it all orchestrates properly, gets shipped over to AWS, pulled into an S3 bucket, and displayed on your website — that's also a story, it's just more complicated. Those are both kind of the same thing in my mind; they just have different levels of setup and different levels of context you have to understand to make sure they're working. Short answer.
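[Editor's note: a sketch of the "little story" framing Joel describes, with his three pieces labeled in the order they appear in a test. The Orders module and its functions are hypothetical.]

```elixir
defmodule OrdersTest do
  use ExUnit.Case, async: true

  test "applying a coupon reduces the order total" do
    # Setup: the world the story starts in.
    order = Orders.new([%{sku: "book", price: 2_000}])

    # Exercise: the thing we actually want to see happen.
    discounted = Orders.apply_coupon(order, "HALF_OFF")

    # Expectation/assertion: how the story should end.
    assert discounted.total == 1_000
  end
end
```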
Charles: To add to what Joel said: life is short. Don't waste time writing software that doesn't solve the problem. Don't waste time fixing problems that made it into production because you didn't write tests for them. Don't waste time taking forever to adjust a feature or add a new one because you don't have tests for the existing code and features. And the stress isn't worth it when you're deploying to production and you just don't know what's going to happen. Get a method and a process and stick to it. Your future self will thank you.
Sundi: You started with "life is short" and I was like, oh my God, is he going to say "life is short, don't write tests"? I was like, where is this going?
Charles: Friends don't let friends write tests.
Joel: He's been talking for 58 minutes, and he's like, now comes my ultimate plan: don't test, anyone. You heard it here first.
Sundi: Yeah. No, I hear you on that one. You definitely want some level of assurance and security. The joke is "don't deploy on a Friday," right? But in reality, you should be able to deploy on a Friday and feel confident that your code will make it there, because it has been tested, it has been quality assured. I've never used that as a verb. I will stop now.
Owen: Rest assured, it's okay.
Sundi: Oh, here we are at the end of the episode.
Owen: Hang on, I forgot to Google Pokémon puns.
Sundi: That was last season, Owen. We can move on.
Owen: You know what, I'll put Pokémon puns to bed. We'll figure out something new for this season.
Charles: There's one thing we didn't say about testing, and that's tests as documentation and as communication to other developers. If you're looking at someone else's code — a library you want to use in your project or learn from — check out the tests. This comes back to advice for new people too: the tests can often be an explanation of how the developer thought the code would work and what problem they were trying to solve. You can better understand what's going on and how to use it, and maybe how to solve a different problem you're working on by referencing that code.
Owen: I've got good news: I just ran a test on this episode, and we have zero failures, 100 percent success. So I think we've done something right, and we don't have to fix anything. All right, great, this has been a great episode. Ship it, we're done — skip CI, straight to prod.
Joel: He's a bad influence on these episodes.
Owen: So yeah, any final plugs? Do you want to point people to social media, a project, or any cause you want to support? We'll go with Joel first.
Joel: If you live in the Indiana region, there's a local tribe, the Miami Indians of Indiana. They've been trying to fundraise for a new place for a while, so throw them a few bucks. It's a 501(c), so you can deduct it on your taxes if you care about that. If you want to follow me, my Tumblr is joelmeador.tumblr.com, and there's a link to my GitLab there, which doesn't have much in it. And then, Charles?
Charles: I'm not very active on social media — you won't find me there.
Owen: You can find Charles on google.com. All right, awesome. Well, thank you both for joining us. Thanks to everyone who's watched us on YouTube — we can't wait to have more episodes; this has been really fun. And thank you, Sundi, for being a great co-host. We'll be back next week with more Elixir Wizards.
Yair: Hey, this is Yair Flicker, president of SmartLogic, the company that brings you this podcast. SmartLogic is a consulting company that helps our clients accelerate the pace of their product development. We build custom software applications for our clients, typically using Phoenix and Elixir, Rails, React, and Flutter for mobile app development. We're always happy to get acquainted, even if there isn't an immediate need or opportunity. And of course, referrals are always greatly appreciated. Please email contact@smartlogic.io to chat. Thanks, and have a great day.