Agent

Don’t miss these ChatGPT and AI red flags

Were you a late adopter to using social media to grow your business? If so, if you could go back in time, would you have been one of the first real estate professionals that began marketing on Facebook, Instagram, or TikTok to get ahead of the crowd and market saturation? 

If you have wished for a do-over, I believe that AI chatbots are just that – your chance to become a leader using this technology before over-saturation. Case in point: only 14% of U.S. adults have used ChatGPT although 58% have heard of it, meaning there is potential for you to be a trendsetter.

Or, if you were an early adopter of those other platforms like myself, you are probably already an early adopter of AI chatbots like ChatGPT (I use it alot!) or perhaps you have a custom bot.  

No matter where you fall on the technology adoption life curve, let’s review some important considerations for all.

Living in the future

I saw a meme on Instagram that proclaimed that we really are living in the future that the Jetsons predicted. The reality and progression of artificial intelligence (AI) makes a once futuristic cartoon seem to be run-of-the-mill. Rosey, the Jetsons’s robot, was capable of conversations and if push came to shove, could complete the work of everyone in the home. 

Mirroring the Jetsons’s life, today AI are helping real estate professionals market, list homes and find clients. Various chatbots are giving us property descriptions, captions for social media, scripts for videos, text for listing updates and presentations, are conversing with prospects and existing clients, and so much more. These chatbots are very close to being the next best thing to having our own personal Rosey the Robot.

It is an exciting time! Yet, with all new technologies, there are bound to be cautionary tales of what not to do. Unfortunately, in the real estate industry, mistakes are not as simple as “oops, I did it again”. AI mistakes may cost us our reputation, license and livelihood. Understandably, I prefer to get in front of such issues and I am sure you would too.

Red flags when using ChatGPT

Here are four red flags when using AI and how we can reduce them impeding our business:

  1. Thinking that an AI chatbot cannot go rogue. They have like this example.
    • Lesson: Always check updates, proofread your AI’s output and be ready to follow-up for damage control immediately.
  2. Creating or using a bot that directs people to or away from certain areas of town (the outcome is the illegal practice of steering, y’all!).
    • Lesson: When creating a custom bot (this is especially applicable to companies rolling out large-scale efforts like Redfin or Zillow), please be sure to partner with the local fair housing center in order to do beta testing (and re-testing) and pre-launch audits to ensure compliance as well as show an affinity for long-term accountability.
      That way if/when unforeseeable issues pop up later, it is easier to intercept and correct the problem and show that, through accountability, there was no malicious intent to do harm (a.k.a. a legal standard that may save you bucks and other penalties).
  3. Allowing bots to collect sensitive information about prospects and clients because this may lead to various forms of fraud and/or identity theft. Phishing can happen with AI too as documented here.
    • Lesson: Program your bots not to take sensitive information, make sure updates  do not override this, and alert (and remind) clients to never giving sensitive information via a chatbot.
  4. Expecting AI to have the most current rules and laws in real-time for a litigious industry like real estate.
    • Lesson: Stay abreast of legal changes and proofread autogenerated text to ensure compliance with the laws of the land (like fair housing and fair lending.)

Armed with degrees in business and legal studies, Lee Davenport, Ph.D.  is an international real estate educator as well as a former RE/MAX managing broker.