15 ways I used AI to save me more than a month of work in 2024

AI applications on various devices. Image: ZDNET

The past year has been a big one for AI, and we here at ZDNET have been documenting it through multiple articles per day, across our entire editorial team. I’ve generally written a couple of articles each week on one aspect of AI innovation or another. It’s been truly exciting and fascinating to explore.

But what AI do I actually use? That’s really a two-part question. There’s the AI I use because I try everything I can get my hands on to analyze and deconstruct it for you. Then there’s the AI I use to increase my productivity, shorten my workflow, and save time. Those two categories are not the same thing.

Also: I’m an AI tools expert, and these are the only two I pay for

For example, while I often test generative AI to create text, I never use that feature for any of my articles. ZDNET does not allow content generated by AI except when it’s used to illustrate how AI works. 

Also, to be fair, I’m a professional writer, so I don’t need AI to help me craft words. That’s not to say generative AI for text generation is bad. It can open doors for people who aren’t practiced in writing, helping them share information and craft letters, reports, and scripts. It can also be a huge benefit to people who struggle with a language they’re not fluent in or for those with other writing challenges.

But I have adopted AI as part of my productivity stack, and it has saved me considerable time. In this article, I’m going to show you 15 ways I used AI in 2024 to increase my productivity and streamline my workflow. I’ll also end with a bit of a wishlist for 2025, looking ahead to ways AI might prove even more helpful in the future.

Also: If ChatGPT produces AI-generated code for your app, who does it really belong to?

Since this is my last article for 2024, please accept my best wishes for a wonderful holiday season and an excellent New Year. I’ll see you on the flip side. And now, here are 15 ways AI increased my productivity in 2024.

1. I used AI to help me program

I found that ChatGPT (and not so much the other chatbots) was very helpful when coding some of the more rote parts of my projects. I used AI for common-knowledge coding (API interfaces, for example), for writing CSS selectors, and for writing and testing regular expressions, among other things.
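As a concrete (and entirely hypothetical) illustration of the kind of rote task I mean, here is the sort of regular expression a chatbot can write and test in seconds, sketched in Python. The pattern and sample text are invented for this example:

```python
import re

# A typical "common knowledge" request: a regex that pulls semantic
# version numbers (such as "2.14.3") out of a line of text.
VERSION_RE = re.compile(r"\b(\d+)\.(\d+)\.(\d+)\b")

def extract_versions(text):
    """Return every x.y.z version string found in the text."""
    return [".".join(m.groups()) for m in VERSION_RE.finditer(text)]

print(extract_versions("Upgraded from 2.14.3 to 3.0.1 last week."))
# ['2.14.3', '3.0.1']
```

The point isn’t that any one pattern is hard; it’s that a chatbot can produce, explain, and test dozens of small patterns like this faster than I can look up the syntax.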

Also: 25 AI tips to boost your programming productivity with ChatGPT

I’m pretty sure my AI use saved me about a month of programming time during the year.

2. I used AI to explain what a block of code does

A lot of modern programming involves working with code other people wrote. A lot also involves working with code you wrote years ago, the details of which you no longer recall.

Also: How ChatGPT scanned 170k lines of code in seconds, saving me hours of work

A few times during my coding, I copied a block of code, pasted it into ChatGPT, and asked the AI what the code does. Usually, the AI will not only tell you what the code does, it will also break down the sections to help you dissect it.

3. I used AI to help me debug code

As I mentioned, the AI will tell you what a block of code does. That means you can feed an AI a block of code that might not work. I found that I could feed code and error messages to ChatGPT, and it would identify what I was doing wrong.

Also: How I test an AI chatbot’s coding ability – and you can, too

This was such a powerful feature that I added it to my testing suite. Now, we can use it to validate how well an AI can determine why a block of code is broken.
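To show what this looks like in practice, here’s a simplified, hypothetical version of the kind of snippet-plus-error exchange I mean. The function and the bug are invented for illustration:

```python
# Buggy version: return "Item #" + item_number
# Pasting that line along with its traceback ("TypeError: can only
# concatenate str (not 'int') to str") into a chatbot reliably gets
# back the fix below: convert the number to a string first.
def make_label(item_number):
    return "Item #" + str(item_number)

print(make_label(7))  # Item #7
```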

4. I used AI to help me do customer sentiment analysis

After a fairly substantial update to one of my products, I was concerned about whether users liked it or found it problematic. I get tech support requests, but not usually enough to gauge the sentiment of my user base. But I do capture users’ reasons for uninstalling my product, which gives me a big database of English-language reasons and data fields.

Also: How ChatGPT’s data analysis tool yields actionable business insights with no programming

I fed all of that to the AI, and it gave me back interesting charts and data analysis without requiring weeks of programming. In a matter of minutes, I had my answer: users generally were happier with the new release. Yay.
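For flavor, here’s a minimal sketch of the underlying idea in Python. To be clear, the real analysis was done by ChatGPT’s data analysis tool, not hand-rolled code, and the keyword lists and sample reasons below are made up:

```python
from collections import Counter

# Classify each free-text uninstall reason by keyword, then tally the
# results. Real sentiment analysis is far more sophisticated; this just
# shows the shape of the question I asked the AI to answer.
POSITIVE = {"love", "great", "easier", "happy"}
NEGATIVE = {"broken", "crash", "confusing", "worse"}

def classify(reason):
    words = set(reason.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

reasons = [
    "the new release is great",
    "the update is confusing",
    "switching computers",
]
print(Counter(classify(r) for r in reasons))
```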

5. I used AI to create images for my albums

One of the biggest ways AI adds value to my workflow is in graphics and video production. While I’ve been a fairly good creative director, I am not an illustrator. So, I found the ability to do text-to-image (and all the related graphics features) hugely helpful.

Also: How AI helped get my music on all the major streaming services

Last year, I used AI to create the album cover for my EP “House of the Head.” This year, I used Midjourney to create the album cover for my second EP, “Choices are Voices.”

6. I used AI to fill and fix my photos

I take a lot of photos, some for myself and some as illustrations for my many projects. Some of those photos enter the camera perfectly composed, but most of them have something that needs to be cleaned up — whether it’s a bit of clutter in the background, a desk that doesn’t extend far enough, or distractions that take away from the image.

Also: How to use Photoshop’s Generative Fill AI tool to easily transform your boring photos

I’ve been using Photoshop to clean up my images for decades. But when generative fill popped onto the scene, along with some of Photoshop’s other AI improvements, including distraction removal (for power lines and such), it helped me save images that might otherwise have been unusable.

7. I used AI to generate vector graphics

When it comes to Adobe products, Photoshop is my daily driver. I use Illustrator, but mostly to resize and modify images from clipart libraries I’ve bought for use in giant PowerPoint presentations. I also use Illustrator to configure and set up cutting instructions for my Glowforge laser cutter, which handles PDFs quite nicely.

Also: Adobe Illustrator’s new generative vector fill is game-changing (even if you can’t draw)

Earlier this year, Adobe introduced a form of generative fill that creates vector-based images, which you can then ungroup and deconstruct into components that are easy to resize and reuse. While I haven’t needed these features for much of my workflow, they’re impressive. I hope to use them more over time.

8. I used AI to generate monthly images for e-commerce

My wife Denise owns a small e-commerce business for people with a particular interest in a very popular hobby. She hosts a very active Facebook group, where one of the group’s favorite activities is creating monthly projects based on a theme she presents at the beginning of the month.

Each month, I use Midjourney to make a picture that evokes that theme. I’ve tried DALL-E 3 and Photoshop, but neither does justice to the subject matter. But Midjourney mostly nails it. So, I use Midjourney to create the image and often bring the image into Photoshop to zhush it up. These images provide the inspiration that encourages hobbyists to craft dozens of impressive new creations each month.

9. I used AI to create moving masks in video clips

Masking in video is the process of separating one part of the video from another. For my videos, the idea is to separate me from the background so I can adjust the color, brightness, and contrast of my face and body separately from the background. I’ve also used masking to hide the background, replacing it with something new.

Also: My top 2 productivity hacks for video editing that save me time – and wrist pain

This used to be done with a green screen and a process called chroma key. But Final Cut Pro (my video editing program of choice) introduced an AI-powered magic mask that can do pretty much everything a green screen could, but with regular video footage. It’s still not perfect, but it’s not half bad, either.

10. I used AI to clean up crappy audio in my videos

Before I moved to the DJI Mic 2, I used a fairly expensive but shockingly bad Bluetooth lavalier mic. Sometimes, the takes were good. Other times, they were just plain horrible. I often had to do retakes when it became clear that the audio in my footage was unusable.

One day, I completed a particularly harrowing take that could not be redone. Despite all my pre-checks, the audio turned out to be painful to listen to. So I uploaded it to Adobe’s Enhance Speech AI tool, and it worked wonders with my terrible take. I’ve used it a few more times since then.

11. I used AI to auto-track me while filming

When a television actor or host moves around on set, there’s usually a camera operator who follows the motion. When you do your own YouTube videos, you’re pretty much on your own. Some of the biggest YouTube channels mimic TV set operations, but my channel is just me, some tools, and a bit of slightly cranky attitude.

Also: I’m a long-time YouTube video producer – these 3 AI tools help me do it better and faster

I had tried tracking gimbals a bunch of times before, but it wasn’t until I found the Hohem iSteady V3 that I had any real success. Its tracking is quite good, thanks to its on-device AI image analysis, and it works without an app, which is one of my favorite features. I just turn it on, mount my phone, make one of a few silly gestures to tell it what I want, and it just works.

12. I used AI for project research

For example, I needed to bend a piece of 1/4-inch-thick aluminum 90 degrees for one of my projects. Different grades of aluminum are softer or harder and will either bend easily or require heat to avoid cracking.

I found out the model number of the aluminum I bought at my local hardware store and asked ChatGPT for the alloy number, which indicates hardness. Then I asked ChatGPT, based on that alloy number and the 1/4-inch thickness, whether I needed to heat the metal before bending. ChatGPT said yes, I needed to heat it.

13. I used AI to help me write articles in the bathroom

And in bed. And in the workshop. And while making eggs. And at the hardware store. And in a bunch of other places and circumstances.

Also: How iOS 18 turned my Apple Watch into the productivity tool of my sci-fi dreams

With iOS 18, Apple added the ability for the Voice Memos app to transcribe voice recordings. I combined that with my Apple Watch. Now, no matter what I’m doing, if I have some useful thoughts, I can record a few sentences or paragraphs right into the Voice Memos app on my phone, then later transcribe them and drop them into an article. Since I do a lot of thinking about articles when I’m not in front of the keyboard, this is a great way for me to capture those nuggets and save time.

14. I used AI to help improve 3D print quality

I actually used AI with 3D printing in two fairly different areas. One was analytics. I had a 3D test print (called a Benchy) that printed far faster using instructions straight from the factory than anything I was able to reproduce using standard software tools. So I fed the G-code (robot instructions) to ChatGPT and asked it for an analysis. The conclusion: the factory version tweaked certain speed and quality settings to make it run faster.
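As a rough sketch of what that kind of analysis involves, here’s a small Python example that scans G-code for feed rates (the F parameter on G0/G1 movement commands, in mm/min). The sample lines are invented, and the actual comparison was done by ChatGPT, not this code:

```python
import re

# Match G0/G1 movement commands and capture their feed rate (F value).
FEED_RE = re.compile(r"^G[01]\b.*?\bF(\d+(?:\.\d+)?)")

def max_feed_rate(gcode_lines):
    """Return the highest feed rate found on G0/G1 moves, or None."""
    rates = [float(m.group(1)) for line in gcode_lines
             if (m := FEED_RE.match(line))]
    return max(rates) if rates else None

sample = ["G1 X10 Y10 F1800", "G1 X20 Y20 F9000", "G28 ; home"]
print(max_feed_rate(sample))  # 9000.0
```

Comparing the feed rates in the factory file against my own slicer’s output is exactly the kind of tedious scan a chatbot handles well.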

Two of my newest 3D printers also claim to use AI to help improve print quality and manage jams. I can’t independently verify whether they’re AI-washing or really using AI in their printer operations. But I can say that both produce nicer prints, with higher reliability, than printers whose vendors don’t claim to use AI.

15. I used AI to fly my drone

Modern drones are amazing. Back in the day, I had a radio-controlled helicopter, which I crashed. A lot. Flying a helicopter is hard. You have no idea how hard it is until you try with an RC toy. The skill of real helicopter pilots flying real people is jaw-dropping.

Also: Using a 4K drone to diagnose roof damage

But drones make that all seem simple. They have built-in AI that understands all the aerodynamics of lifting an object into the air and making it go where you want it to go. Yesterday, in fact, I sent up my DJI Mavic Pro to get some property pictures for laying out a complex security camera grid. After completing its mission, I just tapped “Return to Home.” It handled all the return navigation and landed itself perfectly and smoothly right in front of me.

What I’d like AI to do for me in 2025…

The pinnacle of AI, for me, would be a robot that can go to the kitchen, brew and prepare a cup of coffee, and bring it to me on the couch. “Alexa, bring me coffee,” should result in an actual cup of coffee, prepared exactly as I like it. Otherwise, what good is it to be living in the future?

Also: The AI I want to see in the world: 5 ways it could manage my Gmail inbox for me

Otherwise, here are some things I’d like to do with AI. Note that the video editing items below do exist in some applications, but they’re not available in Final Cut, which is my editing tool. Other capabilities either don’t exist or aren’t yet good enough to help with my workflow.

Here we go. My wishlist.

  • I’d like to use AI to remove pauses and other waste in video clips. This process is very time-consuming, and it should be possible with a click of the mouse.
  • I’d like to use AI to clean up ums and uhs from audio. This is the same idea as above, but for audio takes.
  • I’d like Final Cut to provide the same kind of audio repair that Adobe offers in its online Enhance Speech tool.
  • I’d like to use AI to generate video clips for music videos and B-roll for YouTube videos. We’re seeing the beginnings of this with OpenAI’s Sora, but I’d like it to be as straightforward and commonplace as text-to-image is now. I’d also like it to produce clips of up to 30 seconds.
  • I’d like to use AI to actually manage my incoming email and filters. I don’t need AI to make my emails sound friendlier. I need it to help me sift through the metric ton of crap I get each day.
  • I’d like to use AI to dimension my 3D models. When I’m designing something to fit a real-world part, I spend hours to days taking caliper measurements, translating that into a 3D design, testing, printing, and redoing it. I’d like to be able to just take a few pictures of an object and have the AI fully dimension the entire thing so I could then translate that into my project.

AI has come a long way in the last few years, but it has a big future ahead as well. What have you used AI for in 2024? What do you want AI to be able to do for you next year? Have you used AI as a novelty, or have you integrated it into your regular workflow? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
