Developers Drawing the Owl
Are you doing all you can to protect the sensitive information your callers trust you with? As organizations leverage more sensitive information, securing that data is more important than ever. Twilio offers numerous ways you can protect your sensitive data, but it's up to you to implement the resources Twilio provides responsibly. In this article, learn how to encrypt and redact data collected from Twilio Programmable Voice, using <Gather> TwiML with Twilio Serverless Functions and Voice PCI Mode.

Things you'll need

In order to follow this tutorial, you will need:

- A Twilio account
- A phone number that can receive calls

What are you building?

You will build a simple interactive voice application to handle caller authentication. A Function will be used to prompt the caller for the following sensitive information via <Gather> TwiML:

- "Please enter your 4 digit PIN"
- "Please enter the last 4 digits of your payment card number"

As soon as this information is received from the caller, it will be encrypted. From that moment on, the data will remain encrypted until it reaches its destination. In a real-world implementation, the destination would likely be your backend service for processing. But here, another Function will act as a "dummy API" to demonstrate how the decryption would be performed. You will also enable Voice PCI Mode to redact gathered information in Voice call logs.

The Before

Before jumping into the solution, take a look at what your logs would look like without encryption or redaction. Twilio Functions will log any error generated from a Function to your Twilio Debugger. In this example scenario, you will log an error if certain specific digits are not entered, and you can see the plain-text request parameters in the error received by the Debugger. Programmable Voice will also log the digits collected in plain-text in the Voice call log. You can find this information if you have access to Call Logs or the Debugger.

The After

The data visible after implementing this solution is less vulnerable. By the end, your Function log will show more secure, encrypted values, and your Call log will show *REDACTED*.

Get Started

Twilio Functions

To follow along with these instructions, use the Twilio Console's Function Editor. Advanced developers should consider using the more robust Serverless CLI to create, deploy, and maintain Functions.

Create a Service

Functions are created and contained within Services:

1. Log in to the Twilio Console and navigate to the Functions tab.
2. Create a Service by clicking the Create Service button and adding a name such as encrypted-gather-sample.

Add Dependency

In this solution, the axios library is used to make a request to your "pretend" backend service (the decrypt-gather Function) for processing. Add axios as a dependency to your Service.

Create an Environment Variable

This solution requires a secret key, which will be used to encrypt and decrypt the sensitive data. Your secret key string must be at least 32 bytes in length. Keep this secret private.

To create a random secret, the following command can be used on Mac/Linux:

```
xxd -l32 -p /dev/urandom
```

Alternatively, this secret can be generated through Node.js:

```javascript
crypto.randomBytes(32).toString('hex')
```

Add an Environment Variable within your Service that stores your key.
For testing purposes, the following 32-byte secret key can be used:

```
a154eb4c759711bc2538a7cc021e9e9f17dd8aa63151c62ca28a82a4a404203d
```

Create AES Encryption Function

First, create a Function to handle encryption and decryption of data using symmetric-key cryptography.

Node.js Crypto

Node.js offers a built-in cryptography module called Crypto. Crypto provides several useful methods, like createCipheriv() and createDecipheriv(), which allow us to specify what kind of block-cipher algorithm to employ.

GCM Block Cipher

Advanced Encryption Standard, known as AES, is a technique for protecting data using encryption algorithms. AES can be achieved through a variety of modes of operation. In this solution, you'll be using GCM (Galois/Counter Mode), a symmetric-key block cipher mode which is preferred for its speed and strength.

Code

Create a new Function called AES with the following code:

```javascript
const crypto = require("crypto")

const ALGORITHM = {
  BLOCK_CIPHER: "aes-256-gcm",
  AUTH_TAG_BYTE_SIZE: 16,
  IV_BYTE_SIZE: 12,
}

exports.encrypt = (plainText, key) => {
  // A fresh, random nonce (IV) must be generated for every encryption
  const nonce = crypto.randomBytes(ALGORITHM.IV_BYTE_SIZE)
  const cipher = crypto.createCipheriv(
    ALGORITHM.BLOCK_CIPHER,
    Buffer.from(key, 'hex'),
    nonce,
    { authTagLength: ALGORITHM.AUTH_TAG_BYTE_SIZE }
  )
  // Prepend the nonce and append the auth tag so decrypt() can recover them
  const cipherText = Buffer.concat([
    nonce,
    cipher.update(plainText),
    cipher.final(),
    cipher.getAuthTag()
  ])
  return cipherText.toString('hex')
}

exports.decrypt = (cipherText, key) => {
  cipherText = Buffer.from(cipherText, 'hex')
  // Split the payload back into nonce | encrypted message | auth tag
  const authTag = cipherText.slice(-ALGORITHM.AUTH_TAG_BYTE_SIZE)
  const nonce = cipherText.slice(0, ALGORITHM.IV_BYTE_SIZE)
  const encryptedMessage = cipherText.slice(ALGORITHM.IV_BYTE_SIZE, -ALGORITHM.AUTH_TAG_BYTE_SIZE)
  const decipher = crypto.createDecipheriv(
    ALGORITHM.BLOCK_CIPHER,
    Buffer.from(key, 'hex'), // decode the key from hex, matching encrypt()
    nonce,
    { authTagLength: ALGORITHM.AUTH_TAG_BYTE_SIZE }
  )
  decipher.setAuthTag(authTag)
  const decrypted = decipher.update(encryptedMessage, undefined, 'utf8') + decipher.final('utf8')
  return decrypted
}
```

Note that decrypt() must decode the key from hex exactly as encrypt() does; passing the raw hex string to Buffer.from() without the 'hex' encoding would produce a 64-byte buffer and an "Invalid key length" error for aes-256-gcm.

This Function should be set to a visibility of "Private", as it will only be used from within another Function in the same Service.
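To sanity-check the helper before wiring it into a call flow, you can run a quick encrypt/decrypt round trip. Here is a minimal sketch you could run locally with Node.js; the require path and file name are for local testing only, since inside a Function you would load the module via Runtime.getFunctions() as shown later:

```javascript
// Local round-trip test for the AES helpers (hypothetical local file ./AES.js)
const AES = require('./AES')

// Any 32-byte (64 hex character) secret works; never hard-code real keys
const key = 'a154eb4c759711bc2538a7cc021e9e9f17dd8aa63151c62ca28a82a4a404203d'

const encrypted = AES.encrypt('1234', key)
console.log(encrypted)                    // hex string: nonce | ciphertext | auth tag
console.log(AES.decrypt(encrypted, key))  // prints "1234"
```

Because a fresh nonce is generated on every call, encrypting the same digits twice produces different ciphertexts, which is the expected GCM behavior.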
Create encrypted-gather Function

Next, create the Function that will perform the sensitive <Gather> operations. This Function will be configured as the incoming Phone Number voice webhook in a later step. From this Function, the digits entered by the caller will be encrypted as soon as they are received, and sent in their encrypted state to the final, "destination" Function.

Code

Create a new Function called encrypted-gather with the following code:

```javascript
const axios = require('axios')
const AES = require(Runtime.getFunctions()['AES'].path)

exports.handler = async function (context, event, callback) {
  const twiml = new Twilio.twiml.VoiceResponse()
  const secret_key = context.AES_SECRET
  const functionUrl = `https://${context.DOMAIN_NAME}/encrypted-gather`
  const dummyApi = `https://${context.DOMAIN_NAME}/decrypt-gather`
  const step = event.step || "getLast4CC"

  switch (step) {
    case ("getLast4CC"):
      gatherLast4Card(twiml, functionUrl)
      break
    case ("getPin"): {
      const encryptedCardDigits = AES.encrypt(event.Digits, secret_key)
      gatherPin(twiml, encryptedCardDigits, functionUrl)
      break
    }
    case ("processData"): {
      const encryptedPinDigits = AES.encrypt(event.Digits, secret_key)
      await processGatheredData(twiml, event.encryptedCardDigits, encryptedPinDigits, dummyApi)
      break
    }
  }

  return callback(null, twiml)
}

const gatherLast4Card = (twiml, functionUrl) => {
  const gather = twiml.gather({
    action: `${functionUrl}?step=getPin`,
    method: 'POST',
    input: 'dtmf',
    timeout: 10,
    numDigits: 4,
  })
  gather.say('Please enter last 4 digits of your payment card number.')
  return gather
}

const gatherPin = (twiml, encryptedCardDigits, functionUrl) => {
  const gather = twiml.gather({
    action: `${functionUrl}?step=processData&encryptedCardDigits=${encryptedCardDigits}`,
    method: 'POST',
    input: 'dtmf',
    timeout: 10,
    numDigits: 4,
  })
  gather.say('Please enter your unique 4 digit identification number')
  return gather
}

const processGatheredData = async (twiml, encryptedCardDigits, encryptedPinDigits, dummy_url) => {
  // Make a request to the "dummy" API endpoint - the example decrypt Function
  try {
    const apiResponse = await axios({
      method: 'post',
      url: dummy_url,
      data: { encryptedCardDigits, encryptedPinDigits }
    })
    twiml.say(`Thank you. Your account number is ${apiResponse.data.account} and your balance is ${apiResponse.data.balance}`)
  } catch (e) {
    twiml.say(`We were not able to locate you in our system. Goodbye.`)
  }
  return twiml
}
```

This Function should be set to "Protected", as it will be called from within Twilio and can be secured with the X-Twilio-Signature header.

When implementing this solution in production, you'll need to change the decryption "dummyApi" variable to the URL of your backend service:

```javascript
const dummyApi = `https://${context.DOMAIN_NAME}/decrypt-gather`
```

How is it encrypting?

At the top, you import the functions created in the previous step with the following line:

```javascript
const AES = require(Runtime.getFunctions()['AES'].path)
```

Then, you define your secret by getting it from the environment variable:

```javascript
const secret_key = context.AES_SECRET
```

And, most importantly, any sensitive information is wrapped with the encrypt function. (In this case, <Gather>'d information is passed as the Digits parameter, and can be accessed from the event object.)

```javascript
const encryptedCardDigits = AES.encrypt(event.Digits, secret_key)
```

This handles the encryption of the gathered information.
Create decrypt-gather Function

Finally, let's create a Function to demonstrate how to decrypt the sensitive data. In a production environment, this would likely be a request to your backend service that processes the caller information based on your business needs. In this solution, a third Function will act as the "backend service" that processes this data. This Function will receive the encrypted digits and decrypt them for further processing.

Code

Create a new Function called decrypt-gather with the following code:

```javascript
const AES = require(Runtime.getFunctions()['AES'].path)

exports.handler = function (context, event, callback) {
  const response = new Twilio.Response()
  const secret_key = context.AES_SECRET

  const last4card = AES.decrypt(event.encryptedCardDigits, secret_key)
  const pin = AES.decrypt(event.encryptedPinDigits, secret_key)

  // Hard-coded values used for testing purposes
  if (last4card === "1234" && pin === "4321") {
    response.setBody(JSON.stringify({ account: "AC12345678", balance: "12.55" }))
  } else {
    response.setStatusCode(404)
    response.setBody("No data found")
  }

  return callback(null, response)
}
```

This Function's visibility will be "Public", as it is pretending to be an external service.

How is it decrypting?

At the top, you import the AES functions again and define the secret_key as a variable. Then you call decrypt on the information that was previously encrypted:

```javascript
const last4card = AES.decrypt(event.encryptedCardDigits, secret_key)
```

Additional Configuration

Phone Number Webhook

For the sake of simplicity, connect this Function directly to a Phone Number. To configure the Phone Number:

1. From the Twilio Console, navigate to the Phone Numbers section.
2. Select your phone number, then scroll to the Voice & Fax section.
3. Set the encrypted-gather Function as the A call comes in webhook under Voice Configuration.
4. Save changes.

If you hope to trigger this from Twilio Studio, check out this blog post to learn more about how to incorporate this solution securely with Studio.

Enable PCI Mode

Almost done! You've secured the Functions, but there's still one more area where Twilio retains gathered digits in plain-text: Voice call logs. An inbound call in the Twilio Console would still show the gathered digits even with the encrypted <Gather> solution implemented. Even though Functions secured the data, Voice hasn't. There's only one way to prevent this data from being displayed in the Call log, and that's with PCI Mode. Enabling PCI Mode on your account will redact all data captured from any <Gather> operation.

Enabling PCI Mode on an account is a one-way street. Once it's on, you won't be able to turn it off, and redaction may make troubleshooting Voice issues more challenging. If you're serious about capturing sensitive information securely:

1. Navigate to the Twilio Voice Settings in the Twilio Console. (In the left navigation pane, click on Voice > Settings > General.)
2. Click on the Enable PCI Mode button.
3. Save changes.

Make a call

Now it's the moment of truth: it's time to place a test call to the phone number. From here, there are two paths to take. If you enter 1234 as the last 4 digits of your "credit card" and 4321 as the unique PIN, you'll hear some "dummy" account information returned on the call. This is an example of a successful API response. If you enter any other digits, it will behave as though you aren't a known user and return a 404 response. This is an example of an unsuccessful request, which will log an error to the Twilio Debugger.

How do I know it worked?

Follow the unsuccessful path and take a look at your Error log in the Twilio Console. For the 404 error response, you'll find an 82005 Error from Functions. This is good. Without the encryption, an unsuccessful response would have logged those variables in plain-text. But now the data will log in its safer, encrypted form. You can also check your Call log to confirm the digits show *REDACTED* there as well.
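When you replace the dummy API with a real backend, the receiving service just needs the same shared secret and the same AES-256-GCM logic. As a rough illustration (not part of the tutorial's code), a minimal Express handler might look like the sketch below, assuming you have copied the AES helper module to your server and stored the shared secret in an AES_SECRET environment variable:

```javascript
// Hypothetical production endpoint that receives the encrypted digits.
// Assumes ./AES exports the same encrypt/decrypt helpers shown above.
const express = require('express')
const AES = require('./AES')

const app = express()
app.use(express.json())

app.post('/gathered-data', (req, res) => {
  const { encryptedCardDigits, encryptedPinDigits } = req.body
  // Decrypt with the same shared secret the Function uses
  const last4card = AES.decrypt(encryptedCardDigits, process.env.AES_SECRET)
  const pin = AES.decrypt(encryptedPinDigits, process.env.AES_SECRET)

  // ...look the caller up in your own systems here...
  res.json({ account: 'AC12345678', balance: '12.55' })
})

app.listen(3000)
```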
Is this secure?

Following this tutorial (including the optional PCI Mode steps) would prevent the data from logging in plain-text anywhere within Twilio's ecosystem, and it would prevent anyone at Twilio from being able to decrypt your sensitive data, making this an improvement over the default. However, the secret key used for encryption and decryption is stored as an Environment Variable on the Service, meaning users to whom you grant Twilio Functions access would be able to extract the key and potentially decrypt the values.

Final Recommendation

If you are making modifications to the sample code provided, keep in mind that Functions retain console warnings and errors within internal Twilio systems and in the Twilio Debugger for some time. Do not use any of the following console logging methods with any sensitive, unencrypted data:

- console.log()
- console.warn()
- console.error()

Conclusion

In this lesson, you learned how you can protect data collected from <Gather> TwiML with encryption via a Serverless Function and redaction through Voice PCI Mode. If you want to collect payments from your callers, consider the fully PCI-compliant Twilio <Pay> feature. To learn more about PCI compliance at Twilio, check out the documentation and responsibility matrix. Users trust you to keep their sensitive information private. Make sure you respect and retain that trust by doing all you can to secure the data you process.

Bry Schinina is a developer and educator who deeply appreciates when companies don't expose private information. She works as Tech Lead and Sr. Technical Account Manager at Twilio, solving complex problems and helping organizations succeed with their digital engagement platform. She can be reached at bschinina [at] twilio.com.
SMS in the contact center is an endlessly valuable tool. You can send timely updates and notifications to your customers, provide follow-ups after interactions, or chat in real time or asynchronously to offer support. The ability to communicate with a customer on their channel of choice is fundamental to great service. Genesys Cloud is a contact center application, and you now have the ability to connect your own Twilio account to it for sending and receiving SMS with Twilio Messaging. Follow along to quickly configure Genesys Cloud BYO SMS with Twilio.

Prerequisites

You won't need a lot to get started:

- A Twilio account (free or paid). If you are new to Twilio, click here to create a free account.
- A mobile phone number that can send and receive SMS.
- Genesys Cloud CX 2 or higher (or Digital Add-on) licenses.

Different countries, territories, and carriers have different registration requirements for phone numbers. You'll need to fulfill the requirements for the number type you purchase before continuing the integration.

Provision your Genesys webhook

Before importing your Twilio numbers into Genesys, you first need to provision your Genesys webhook and add it to your phone number configuration. Find the Genesys Cloud webhook URL for your region. Then go to your Twilio Console and open up the configuration for your phone number. Scroll down to Messaging Configuration, set A message comes in to Webhook, and paste in your Genesys webhook.

Sending in the USA?

If you are using US 10DLC numbers and have completed your A2P 10DLC registration, then your numbers and registration will be tied to a Twilio Messaging Service. In this case, ensure that your Messaging Service has Defer to sender's webhook selected in the Integration menu, and configure your phone number as above.
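If you manage many numbers, you may prefer to script this webhook step rather than click through the Console. Here is a minimal sketch using the Twilio Node.js helper library; the phone number SID and the Genesys webhook URL are placeholders you would replace with your own values:

```javascript
// Point a Twilio number's inbound-message webhook at Genesys Cloud
const twilio = require('twilio')
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN)

client.incomingPhoneNumbers('PNXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX') // placeholder SID
  .update({
    smsUrl: 'https://example-genesys-webhook-for-your-region', // placeholder URL
    smsMethod: 'POST',
  })
  .then(number => console.log(`Webhook set for ${number.phoneNumber}`))
```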
Set up BYO SMS for Twilio

Time to open up Genesys Cloud and visit the Admin menu. Under Integrations, locate the Bring Your Own Twilio SMS Account Configuration tile and select Install. If you cannot locate the Twilio SMS tile, you may need to add it from the Genesys App Foundry. Open Configuration and then Credentials, and enter your Twilio Account SID and Auth Token. Click OK, Save, and set to Active. Now your Twilio account and your Genesys account are connected.

The only thing left is to import your Twilio phone number. In Genesys Cloud, go to Admin > Message > SMS Number Inventory. Then, choose Add Numbers and Import. From the integration list select Twilio, then enter your number type (Local, Mobile, or Toll Free) and type in your Twilio phone number in E.164 format. If you are importing a Short Code, you will also need to select the country from the drop-down box. Click Confirm Import and Genesys will load your Twilio numbers into its inventory. For more information, see Managing your SMS number inventory. Even after importing, your phone numbers are still managed and controlled from your Twilio Console, and can still be used for other things, such as voice.

Configure your queue for SMS interactions

Now that your Twilio number is available in Genesys Cloud, we need to configure it for inbound and outbound routing. To do this, we will configure two things: a Message Routing Flow to handle inbound traffic and a linked queue to enable agent-initiated outbound traffic. Begin by going to Admin > Routing > Message Routing. Create a new flow to direct inbound messages to the appropriate queue.

Once your flow is saved, go to the Message Routing page and, under Addresses, click + and select the phone number to add it to the inbound flow. In your Genesys Cloud console, go to Admin > Contact Center > Queues and open the settings for the queue that you wish to add your SMS capabilities to. Select the Message tab and add a new outbound SMS number. Add your newly provisioned Twilio number and click Save. This will enable you to send outbound SMS from this queue.

Send a test message

Now it's time to try it out. Let's send a test message to ourselves. Go to your Genesys Cloud console, create a new interaction, and choose SMS. Select your configured queue and enter your own phone number to send to. Make sure you enter the number in E.164 format. If you have Genesys Cloud integrated with a CRM and are pulling phone numbers from there, make sure that the CRM has numbers saved in the correct format. Select Compose and begin writing your message. Click Send. Once your message comes through, you can try replying to it as well.

And there you have it: your Genesys Cloud platform is configured to use your Twilio account for SMS. You are now free to proactively reach out to your customers and allow them to contact you on their preferred channel. All of this will enable you to provide an even better customer experience.

But wait, there's more...

If you are sending to Australia, Singapore, or other selected countries, you can utilize a different sending option: alphanumeric sender IDs, also called alpha senders. Read on to learn how to configure alpha senders in your Genesys Cloud account.

Alpha Senders for Selected Countries

Alpha senders are a great way to provide brand recognition and build trust with your customers. Genesys Cloud does not offer alpha senders as an option in the number inventory; however, Twilio is still able to support alpha senders to some countries via this integration. Countries such as Australia require pre-registration in order to enable alpha senders, and as part of this process Twilio applies a sender override for the destination country. This means that regardless of which number you send from, if the message is going to a country with alpha sender configuration, the message will appear from the alpha sender, NOT the phone number.

In order to configure an alpha sender for your Genesys Cloud instance, follow the above steps to connect your Twilio account and import a phone number. The number you choose is not relevant here; we recommend selecting a US mobile number. If you do not intend to send to the USA via this number, then you do not need to complete A2P 10DLC registration. Next, complete the pre-registration in your Twilio console for your desired country. It may take a few business days for your registration to be completed. Once it is, you will receive an email from Twilio support advising that your alpha sender is configured on your account. Any messages you send to the specified country will come from the alpha sender. Messages to countries that do not require alpha sender registration will be sent from the phone number that you imported.

Alpha senders are 1-way only, meaning customers cannot reply to any messages. Customers must be able to opt out of messaging; we recommend that you insert an opt-out URL at the end of messages. As pre-registered alpha senders are applied as an account-level override at the Twilio level, you will not be able to send any messages from a long code to that country from that Twilio account.
If you need to have both alpha senders and long codes available, you must configure your alpha sender in a Twilio subaccount and complete the Genesys Cloud BYO SMS integration for that account as well. This configuration option is only available for direct Twilio customers; if you are an ISV, please speak to sales. The full list of supported countries for alpha sender configuration in Genesys Cloud can be found here, marked as "Yes - Registration Required". Countries that support alpha senders but do not require registration are not eligible for this configuration.

Keen to learn more? Check out the Twilio Docs for more great resources on how to use messaging to its full potential.

Courtney Harland is a Senior Solutions Engineer at Twilio, with over 10 years of experience in the contact centre industry. As a proud millennial, she has a vested interest in helping businesses improve their messaging capabilities, so that she doesn't have to use the phone. She can be reached at charland [at] twilio.com.
In this post, we will learn how to quickly create a web app that allows you to export a list of your Twilio numbers, transfer them within your main account, and check the usage for each number and asset of your Twilio account. You won't need your own server to set it up, nor do you need to know how to code. You don't even need to read the code if you don't want to. I'll show you how to do all of these operations using Twilio's Functions product, which you can use directly from within your Twilio Console.

Demo App

Before I show you how to create the app on your Twilio account, you can test it on our Demo account. The accompanying video also shows how the app works.

How are we going to do this?

You won't need your own server, as the app will be hosted on our Twilio Functions service. Detailed information on Twilio Functions can be found here. You can host JavaScript code, HTML, CSS, and other assets needed to create the app. We call this style of development serverless computing, since Functions will automatically provide resources when you use the tool. The code for the Function will be copied from our GitHub repository, so you also don't need to know how to code. Note that you are free to host the code on your own server as well, and even integrate it into your own website.

Prerequisites

Now that we know where to find the code, let's see what is needed from you in order to create the app. For this tutorial you will just need:

- Any browser (best if it's up to date)
- A Twilio account (sign up for Twilio if you don't have an account already)

Build the app and use it in the browser

The accompanying video shows the steps needed to create the app in Twilio Functions; those steps are also listed in detail below. The Function should be created under your main account. You will then have the option, in the created apps, to search the logs of any subaccounts located under your main account.

Steps to take:

1. Open the GitHub repository for the desired function. We will reference this as we create the Function in Console.
2. Create a new Function in your Twilio Console by going to Functions & Assets > Services > Create Service.
3. Add a new function file (button in top left corner) and give it the same name as the function in the /functions folder on GitHub (without the .js extension).
4. Repeat step 3 for all files in the /functions folder.
5. Open the /assets directory in the GitHub repo.
6. Create an asset file (button in top left corner) and name it index.html (with the .html extension).
7. Open the index.html file in the GitHub repository and copy its code content into your asset.
8. Do the same thing for the .css files that are in the assets folder.
9. For these assets, when you open them in the online editor, the bottom-right corner should say "html" and "css" respectively.
10. Click on Environment Variables and add a new key called "Password" with your desired value (this will be your password to use the app).
11. Save all functions and assets in the online editor.
12. In the online editor, next to the name of each file, you will see the current access level. Set it to Public for all of them; by default they are Protected.
13. Click on Deploy All, and wait for the deployment to be completed.
14. Click on Dependencies and update some items as needed: the twilio module to version ^3.61.0, and the @twilio/runtime-handler to version 1.2.1. Set the Node version to 18 at the top.
15. Click Deploy All again, and wait for deployment.
16. Click on index.html from the list in Console and, in the bottom right corner, click on Copy URL. Paste it in a new browser tab and hit Enter. This will open the app.
17. Bookmark the app URL, as you will be able to use it at any time after you have completed the above steps. In case you lose the URL, just open the Function in Console and repeat step 16.

The app will have the same look as you saw in the demo video above; however, you might see additional functions in your app, since I might have added more of them after we recorded the video. What will also be missing in your app is the list of subaccounts, exposed password, and some other info meant only for the demo. Rest assured that the core functionalities will be there! As extra help, to make sure everything is configured correctly on your end, please refer to the screenshots below.

Image 1. Final Function in Console
Image 2. Environment variables
Image 3. Dependencies

What else can you do with this app and what should you keep in mind?

So now that we have seen how to create the app that will help you manage your numbers and check usage, let's explore app customization, security, and more. The GitHub link I shared is part of a larger repository which contains Twilio code samples. Each directory contains assets and functions that you can copy to a new Function in the same way as described in the numbered list in the "Build the app" section above. This means that you don't need to create the app with every function to start, but only the ones that you would like to have; each one can be created as a separate Function, with a different URL and password.

In addition, a great benefit of Functions is the control over how they can be accessed (Function visibility). Because the created app is publicly accessible in a browser (but it can only be used if you know the password), you can keep the functions Private or Protected, and only set them to Public when you need to use them (see step 12 above for more). Apart from controlling Function visibility, to make the app as secure as possible, set a strong password, and only share the link of the app and the password with people you trust and who should be using the app.
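For illustration, a password check like the one the app performs can be implemented in a Function by comparing a request parameter against the Password environment variable. Here is a minimal hypothetical sketch; the actual repository code may differ:

```javascript
// Hypothetical sketch of gating a Function behind the Password environment variable
exports.handler = function (context, event, callback) {
  const response = new Twilio.Response()

  // Reject requests that don't supply the correct password
  if (event.password !== context.Password) {
    response.setStatusCode(401)
    response.setBody('Unauthorized')
    return callback(null, response)
  }

  // ...perform the number export / transfer / usage lookup here...
  response.setBody('OK')
  return callback(null, response)
}
```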
A very important thing to keep in mind is that you need to click on the Deploy All button in the online editor once you are done making your changes. A log will be visible in case there are errors or the Function did not deploy properly. Finally, you might be wondering if Twilio charges anything to use Functions. Twilio charges for Functions usage; there is a free tier which resets monthly, and you should be aware of the usage costs.

We can't wait to see what you build! Have fun! Thank you to Jyden Tyree for recording the videos that accompany this post.

Danijel Šumberac is a Senior Manager at Twilio with many years of leadership experience and knowledge in coding, SQL, and Data Analysis, and is highly interested in empowering users to create their own API solutions. He can be reached at dsumberac [@] twilio.com.
Twilio provides straightforward communication REST APIs with great documentation. In addition, developers have long used Twilio's API Explorer to get started building applications. However, on December 15th, 2023, we will End of Life the tool. In this post, we will learn to use Bruno, an open source API client, as an API Explorer replacement.

Why an API client?

With an API client, developers can make requests to our REST API, explore responses from Twilio's servers, and copy code snippets for use in their applications. A developer can test operations such as buying a phone number, sending a text message, or making a phone call programmatically before writing a lot of code. This ensures that building powerful telephony applications is a seamless process, and gives developers confidence to integrate the right Twilio APIs into whatever application they are building. In this blog post, I will show you how to send an SMS without coding, using Bruno as the API client.

Prerequisites

- Twilio Account: Set up a Twilio account (if you don't have one yet, sign up for a free Twilio trial).
- Phone Number: Note that most countries have differing registration requirements depending on number type. For some API operations, you'll need to complete those requirements before exploring API calls.
- Twilio OpenAPI Specification: Download Twilio's OpenAPI Specifications and unarchive them into a working directory on your local computer. We will need to import this into Bruno.
- Bruno: Install the latest version of Bruno for your operating system.
- Mobile Phone: An SMS-capable device to receive your test message.

Set up Bruno

Import Twilio OpenAPI Specifications into a Bruno Collection

1. Open the Bruno application and navigate to Import Collection.
2. Click Import Collection, and select OpenAPI V3 Spec in the opened dialog box.
3. Browse to the location where you extracted the Twilio OpenAPI Specifications. The file twilio_api_v2010.json is located in the folder twilio-oai-<version> -> spec -> json -> twilio_api_v2010.json.
4. You will be prompted to add a location where you want the specs to be imported. Click Browse, and select a location on your computer.

Configure Bruno to use Twilio APIs

Open up your Twilio Console. We will need the following information from the Home screen:

- Account SID
- Auth Token
- My Twilio phone number

Remember, this number should be registered for A2P 10DLC or Toll-Free Verified, or otherwise fulfill the requirements of the number type in that number's country.

Set up the Twilio Environment for Bruno

Now when you open Bruno, you will see Twilio APIs in the left-hand side menu. In the upper right corner of Bruno, drop down to Configure and create the new environment. Give the environment a name so you know which Twilio account it is associated with. Now, we will add 3 variables from the information we collected from our Twilio Console. Add the following names with the respective values by hitting the + Add Variable button:

- TWILIO_ACCOUNT_SID
- TWILIO_FROM_NUMBER
- TWILIO_AUTH_TOKEN

Be sure to click "Secret" for the TWILIO_AUTH_TOKEN. Save our Environment!
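As an aside, once you have these three values saved, the request we are about to build in Bruno is equivalent to the following call with the Twilio Node.js helper library. This is just a reference sketch; the destination number is a placeholder:

```javascript
// Equivalent of the CreateMessage request we'll build in Bruno
const twilio = require('twilio')
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN)

client.messages
  .create({
    to: '+14155551212',                    // placeholder: your SMS-capable handset
    from: process.env.TWILIO_FROM_NUMBER,  // your registered Twilio number
    body: 'Hi, My First Twilio SMS',
  })
  .then(message => console.log(message.sid))
```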
Expand to Twilio API -> Api20100401Message -> POST CreateMessage in the opened Bruno collection, and select the environment we just created to use with this Bruno collection.

Send our first SMS

Navigate to the following in your Twilio API collection: Twilio API -> Api20100401Message -> POST CreateMessage.

At a minimum, to send an SMS with the Twilio API, the API requires:

- Account SID - already configured under auth
- Auth Token - already configured under auth
- A From number - a Twilio phone number you are sending the message from
- A To number - where you want the message sent
- A Body - what the message is

We need to set up our Basic Authentication to use the Twilio credentials we set up in our Environment. Under the Auth tab for CreateMessage, change the Username to {{TWILIO_ACCOUNT_SID}} and the Password to reference the {{TWILIO_AUTH_TOKEN}} environment variable.

Now, under the Body tab, you will see all the parameters available for the CreateMessage endpoint. We will keep only the required parameters to send the SMS. Those are:

- To
- From
- Body

Using E.164 format, enter the number you wish to send an SMS to, for example, +14155551212. We will use {{TWILIO_FROM_NUMBER}} as the From value; the double curly braces reference it as an environment variable. Then input the message we want to send in Body: Hi, My First Twilio SMS.

Finally, update the POST URL to include your account's Account SID by changing {AccountSid} to {{TWILIO_ACCOUNT_SID}} in the POST URL. Click the arrow (->) button to send the API request. You should now receive the message and see the Twilio response. Congratulations on sending your first message using Bruno as an API client!

What next?

Check Logs

If your message was not successful, check the Twilio Logs in your Twilio Console. Did Twilio receive the message? Check the Monitor > Logs > Messaging section.

Learn more about the Messaging API

We only covered the basics with the CreateMessage endpoint. I'll give you a few suggestions on where you might explore next. Maybe you'd like to explore sending messages further: https://www.twilio.com/docs/messaging/api/message-resource#create-a-message-resource. Remember all the boxes we un-checked? That document explains what else you can do with the message endpoint. For example, let's enable MediaUrl and send a picture. Check the checkbox and provide a MediaUrl, such as: https://upload.wikimedia.org/wikipedia/commons/9/93/1%25_20-%25_20EEE2HHO.gif

Conclusion

We've only scratched the surface in this post. I hope that by having another tool to understand Twilio's API, we've inspired you to explore building with Twilio. As always, we can't wait to see what you build.

Alex Chun is a Principal Solutions Engineer at Twilio.org. Alex partners with social impact organizations to create tech solutions for communication workflows. His areas of interest are hotlines, technologies for fighting climate change, and animal rescues. He is currently working on solutions using Twilio technologies like Flex, Frontline, and WhatsApp. You can reach him at achun [at] twilio.com.
If your product offers enhanced voice or messaging capabilities to your customers, you are aware of increased requirements to capture details about their business and use case. These requirements must be met in order to use certain communication channels, such as Verified Toll-Free or US A2P 10DLC messaging. These efforts are a part of broader industry regulations referred to as Know Your Customer, or KYC for short. Over the past decade, carriers and regulators around the world have been diligently working to create a more trusted and verified messaging and voice ecosystem for businesses to engage with consumers, in order to protect consumers from spam, fraud, and bad actors. While there are currently different requirements based on channel, country, and number type, we expect to see a continued shift towards more KYC requirements around the world.

Fortunately, Twilio has a solution to help customers navigate the current requirements, comply with new channel registration requirements, anticipate future needs, and enable additional features for voice and messaging using the KYC information they collect: Trust Hub. Using Twilio's Trust Hub through the Console or our API, developers can submit the necessary information for a single customer profile or business identity and use it to register for multiple trusted communications channels on behalf of their end users.

Consider your new user onboarding process: is it self service? If so, this post is for you. We'll look at ways to capture necessary details from your end users at sign up, submit the information to Twilio Trust Hub, and tee your customers up for success with Twilio's trusted voice and messaging products.

Why leverage Twilio KYC products in your user onboarding funnel?

Over the past few years, the telecommunications landscape underwent a massive phase change regarding how businesses contact their customers. Starting with who those businesses are, their intended use case, and whether they have consent from consumers to receive their messages and voice calls, communications look a lot different than they did in the past. Already within the US, customers wishing to use local and toll-free numbers for notifications and marketing SMS/MMS messages must either register their local numbers for US A2P 10DLC or have their toll-free numbers verified in order to start sending messages from these sender types. While number registration is now mandatory in the US for A2P 10DLC and Verified Toll-Free messaging, you can expect that similar requirements may roll out in other countries as well. Twilio's aim with Trust Hub is to help you fulfill KYC or compliance mandates in other channels and locales by reusing or building on top of information contained within Trust Hub customer profiles.

The one business profile to rule them all

Direct vs. ISV

You should first determine whether you are a direct customer or an Independent Software Vendor (ISV). Direct customers utilize Twilio for their businesses' own communications needs. ISVs are Twilio customers who utilize Twilio APIs to support their own software solutions. ISVs then sell this software to their own clients, who will each have their own uniquely branded communications, be it via messaging, voice, or another channel such as email.
If your business fits into the ISV category, then you will have the additional requirement to create a Primary Business Profile in Trust Hub within the Console, along with a Secondary Business Profile, which can be submitted via API for each additional client you need to support.

What KYC data should you collect and submit to Trust Hub?

Often within a product's account sign-up process, various information about the user or business is captured regardless: for example, First Name, Last Name, Address, Email, Phone Number, etc. Typically, this information is needed for account creation, billing setup, user verification, account notifications, opting into product marketing campaigns, and so on. If you're an ISV, creating and submitting a Secondary Business Profile via API is a multi-step process. Let's take a look at the high-level flow and what you need to submit. For the full list of parameters and their accepted values, be sure to review our full documentation (you'll also find a larger version of the below flowchart).

Business Information

First, you will need to capture and submit basic information about the business or customer for which you are trying to create a compliant Secondary Customer Profile on Trust Hub. Many of these will be predetermined accepted values through the Trust Hub API which you will need to select from; others will be freeform strings. See the full documentation for the required Business Information needed for a compliant Secondary Customer Profile.

- Business Identity (required), e.g., Direct_customer
- Business Type (required), e.g., Limited Liability Corporation
- Business Industry (required), e.g., Retail
- Business Registration ID Type (required), e.g., EIN
- Business Registration Number (required), e.g., 12-3456789
- Business Regions of Operations (required), e.g., USA_AND_CANADA
- Website Url (required), e.g., test.com

Authorized Representatives

Next, you will need to submit some basic contact information for authorized representative(s) of this company. See our full documentation for the required details for Authorized Representative(s) needed for a compliant Secondary Customer Profile.

- Last Name (required), e.g., Doe
- First Name (required), e.g., John
- Email (required), e.g., johndoe@email.com
- Business Title (required), e.g., ceo
- Job Position (required), e.g., CEO
- Phone Number (required), e.g., +11112223333

Address

See the full documentation for the required Address information needed for a compliant Secondary Customer Profile.

- Address Friendly Name (required), e.g., Headquarters
- Country (required), e.g., US
- Street Address 1 (required), e.g., 123 Main St
- City (required), e.g., San Francisco
- State/Province/Region (required), e.g., CA
- Postal Code (required), e.g., 94016

Once you have gone through the necessary API steps to create a Secondary Customer Profile with the above information, you will then submit the profile to Twilio for review. After a Business Profile has been submitted, it is vetted by our expert operations team. This process can take up to 72 hours. The team will confirm the supplied business information and then approve or reject the submission. We will notify you via email about the outcome of the vetting process and the status of the Business Profile. During the vetting process, Twilio may reach out to the authorized representatives included in your Business Profiles to confirm your business identity.
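To make the flow concrete, here is a minimal sketch of the first and last API steps (creating the profile shell, then submitting it for review) using the Twilio Node.js helper library. The policy SID is a placeholder, and the intermediate steps (creating end-users, the address, and supporting documents, then assigning them to the profile) are elided; see the full documentation for those:

```javascript
// Sketch: create a Secondary Customer Profile shell, then submit it for vetting
const twilio = require('twilio')
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN)

async function registerClientProfile() {
  // 1. Create the (empty) Secondary Customer Profile
  const profile = await client.trusthub.v1.customerProfiles.create({
    friendlyName: 'Acme LLC - Secondary Customer Profile', // placeholder
    email: 'compliance@example.com',                       // status notification email
    policySid: 'RNxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',       // placeholder: Secondary Customer Profile policy SID
  })

  // 2. ...create and assign end-users, authorized representatives,
  //    and the business address here (elided; see the docs)...

  // 3. Submit the assembled profile for review
  await client.trusthub.v1.customerProfiles(profile.sid)
    .update({ status: 'pending-review' })

  return profile.sid
}
```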
Next, let's take a look at which Trust Products you can enable using an approved Trust Hub Business Profile.

Enable Trust Products

US SMS/MMS Messaging

US A2P 10DLC

Registering a campaign for US A2P 10DLC requires that you first register an A2P brand. Twilio uses the information within a Trust Hub customer profile to register a brand for A2P messaging. This goes into determining messaging throughput and the types of campaigns your users will be able to register. If you have already undergone A2P 10DLC registration for your company or your customers', then there's a good chance you already have the Trust Hub customer profiles necessary to register for other trust products as well.

Verified Toll-Free Messaging

In order to send SMS/MMS from a US toll-free number, customers are required to verify their toll-free numbers in the Console or via API. Similar to A2P 10DLC brand registration, a Trust Hub customer profile is required to associate the basic information about a business with a specific toll-free number, along with additional information.

Enhanced Voice Capabilities

SHAKEN/STIR

Signature-based Handling of Asserted Information using toKENs (SHAKEN) and Secure Telephone Identity Revisited (STIR), "SHAKEN/STIR" or "STIR/SHAKEN" for short, are two of the latest industry protocols for voice; these technologies aim to combat caller ID spoofing and verify the authenticity of voice calls placed to subscribers. To learn more, see Trusted Calling with SHAKEN/STIR. Twilio can enable SHAKEN/STIR on Programmable Voice in customers' call flows, allowing them to place more trustworthy calls while improving answer rates. SHAKEN/STIR implementation on Twilio Programmable Voice requires the creation of a trust product, which is based on the information contained within a Trust Hub Business Profile.

Prevent Spam Labeling with Voice Integrity (Public Beta)

Have you ever received a phone call with an ominous "Spam" or "Scam Likely" label applied to it? This is because unwanted spam and robocalls have become much more prevalent in recent years, and carriers and handset makers are cracking down with features such as these to warn subscribers of potential spam calls. However, if legitimate businesses aren't taking the necessary steps, their calls may be labeled as "Spam" or "Scam Likely" as well. Twilio announced the Public Beta availability of Twilio's Voice Integrity to remediate spam labels, a feature that allows businesses to submit information about their company and use case to analytics vendors who monitor and apply spam labels to calls placed over carrier networks. Once again, to enable this feature and increase the trustworthiness and consumer answer rate of outbound voice calls, customers will need to use an existing Trust Hub Business Profile and create a Voice Integrity trust product.

Branded Calling (Public Beta)

Twilio recently announced public availability of Branded Calling, which allows customers to add a display name to outbound calls. This offers a unique branded call experience for businesses. Along with some of the aforementioned voice enhancements such as SHAKEN/STIR and Voice Integrity, Branded Calling will help to increase the reputation and trust of voice calls placed through Twilio Programmable Voice. You guessed it: to create and enable a Branded Calling trust product, customers will need an approved primary or secondary customer profile within Trust Hub.

Complete your self-service onboarding process with User Authentication and Identity

Self-service onboarding is a great time and opportunity to implement some form of user verification.
At Twilio, we think about this in terms of a broader category: User Authentication and Identity. To us, this means confirming the authenticity of user-provided contact information, reducing the friction to authenticate legitimate users, and preventing unwanted or bad actors from exploiting your account sign-up process. Within User Authentication and Identity, Twilio has two primary product offerings: Lookup and Verify.

Lookup API

Using the Lookup API, Twilio customers can submit user-provided data (such as a phone number) and receive an API response indicating whether it's an SMS-capable mobile phone number or an incompatible landline. Verifying this type of information on the front end is very valuable if you want to ensure you can reach your customers with SMS account notifications or marketing campaigns. Better yet, it allows you to implement handlers on your sign-up forms which can ask users to provide alternative values; for example, instruct a user to provide an SMS-capable number, or invalidate their submission of a landline or VoIP phone number if you so choose. Additional payloads can be requested as well, such as Identity Match: this allows you to pass information about the user (e.g., first and last name) and, in conjunction with a provided phone number, check data provided by authoritative sources (e.g., mobile carriers, government agencies) to help determine if a person is in fact the owner of that phone number.

Verify API

With Verify, Twilio customers can take advantage of a purpose-built user verification API to deliver one-time passwords over multiple channels such as SMS, WhatsApp, Email, and Voice. Verify offers built-in code delivery and validation logic without the need for a customer to host this infrastructure themselves. In addition, users of Twilio Verify can customize various aspects of their OTP messages with adjustable timeouts and failover logic, as well as the option for custom message templates and codes. Verify is built on top of Twilio's Super Network and offers a shared number pool (which also reduces the overhead of number management); this eliminates guesswork for customers and ensures OTP messages are delivered over optimal routes and senders regardless of destination country.
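As a rough sketch of how Lookup and Verify might slot into a sign-up flow, here is an example with the Twilio Node.js helper library. The Verify service SID and phone number are placeholders, and the Lookup fields shown assume Lookup v2 line type intelligence:

```javascript
// Sketch: screen a sign-up phone number with Lookup, then send an OTP with Verify
const twilio = require('twilio')
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN)

async function screenAndVerify(phoneNumber) {
  // 1. Lookup: reject landlines before attempting SMS OTP
  const lookup = await client.lookups.v2.phoneNumbers(phoneNumber)
    .fetch({ fields: 'line_type_intelligence' })
  if (lookup.lineTypeIntelligence?.type === 'landline') {
    throw new Error('Please provide an SMS-capable number')
  }

  // 2. Verify: deliver a one-time password over SMS
  await client.verify.v2.services(process.env.VERIFY_SERVICE_SID) // placeholder service SID
    .verifications.create({ to: phoneNumber, channel: 'sms' })
}

// Later, when the user submits the code they received:
async function checkCode(phoneNumber, code) {
  const result = await client.verify.v2.services(process.env.VERIFY_SERVICE_SID)
    .verificationChecks.create({ to: phoneNumber, code })
  return result.status === 'approved' // 'approved' means the code matched
}
```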
Enhanced Capabilities: Fraud Guard and SNA

When setting up a Verify service on your Twilio account, you will immediately have access to Verify Fraud Guard, a default feature that proactively detects SMS pumping fraud, preventing suspicious OTP (one-time password) requests over SMS. This can be adjusted to basic, standard, and max levels of protection depending on your preference. Sign-up forms which collect mobile numbers for purposes of SMS OTP are often exploited by bad actors who supply fraudulent numbers and request OTP codes over and over to collect on any associated delivery fees. This is known as toll fraud, and it can cost companies a lot of money unless they take preventative measures. To learn more, see Twilio Verify Fraud Guard: 100% guaranteed protection against SMS Pumping Fraud, and our Verify Fraud Guard documentation.

Silent Network Authentication, or SNA, is an exciting new form of user authentication which can be implemented in the background of your application (read: silently) to verify a user is in possession of their provided phone number. Twilio works behind the scenes with carriers to confirm information about the end user, their device, and its network authentication. This happens in a way that is more secure than OTP over SMS, while also providing a frictionless user experience compared to traditional means of asking a user to verify their phone or email address. Users do not need to input a code manually. To learn more, see What is Silent Network Authentication? and our Verify SNA documentation.

Conclusion

As you've learned, there are several pieces of information to capture and submit to Twilio Trust Hub if you want to enable your customers with enhanced messaging and voice products. Each business or brand you support will assemble its Business Profile within Trust Hub. That profile will be used as the foundation and starting point to register for a number of trusted communications channels, such as US A2P 10DLC, Verified Toll-Free messaging, and SHAKEN/STIR for voice, to name a few. And as we mentioned, we expect creating high-quality profiles in Trust Hub will help you with anticipated future KYC requirements. Finally, we looked at ways to harden your self-service flow with User Authentication and Identity products such as Lookup and Verify. For more comprehensive information about Trust Hub, please see our Trust Hub Overview documentation, as well as links to the products it supports.

Casey Runnells is a Senior Solutions Engineer at Twilio and is passionate about helping businesses grow and redefine their strategies around enhanced customer communication and data. You can reach Casey at crunnells [at] twilio.com.
Let me get this out of the way first: no, this is not a blog post about chatbots! Chatbots are great, but let's get visual. What if customers could send in photos from their mobile devices instead of typing out text? The old adage that a "picture is worth a thousand words" still rings true, and new AI capabilities promise to supercharge how customers interact with businesses. New AI models such as OpenAI GPT Vision have the ability to take images as input. While this capability is emerging, it is clearly powerful and has many potential business applications.

In this blog post, we will spin up a proof of concept using Twilio, OpenAI, and AWS. Twilio's Messaging platform with MMS is the natural choice to enable your customers to interact with images. The OpenAI Vision model is exciting and gives your business the ability to review, classify, and understand images programmatically. AWS provides the infrastructure to coordinate events and data.

This blog post is split into two parts:

1. Multiple examples of how businesses could use this capability.
2. Instructions to configure and deploy this proof of concept.

The "how" is more interesting, so we will start there to get your gears spinning on how your business can programmatically understand customer images with computer vision.

Part 1: How can businesses use this capability with Twilio?

This proof of concept is straightforward to use. Just take any MMS-enabled device and send images to your Twilio number. You can include a keyword along with your image to trigger different kinds of analysis. For example, I can send an image of a damaged automobile along with the word "insurance" and that would trigger a response back with an assessment of the damage. Or, imagine a customer sending a screenshot of an application error and receiving a response telling them how to fix it. Or, perhaps, a customer could send a picture of a damaged product to kick off a return process. Using images to trigger helpful solutions has the potential to resolve customer inquiries much quicker than we can today.

While you are going through the examples, be sure to remember that the results from the AI image analysis can be handled programmatically and can trigger additional events (actions) and replies based on the image and the specific state of the customer. Learning by video can be effective; please feel free to reference the accompanying video at any part of this exercise.

Picture and keyword examples

In the next section, I will show you some example uses for the solution we'll build in this post. In each, I'll include the keyword for context, explain the scenario, show the prompt to OpenAI, and show the response I received while testing.
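For context while reading the examples, each analysis boils down to passing the MMS image URL and a keyword-specific prompt to a vision-capable OpenAI model. A minimal sketch of such a call with the official openai Node.js library follows; the model name is an assumption (any vision-capable model should work), and the actual proof of concept wires this through AWS rather than calling it inline:

```javascript
// Sketch: ask a vision-capable OpenAI model about an image received via MMS
const OpenAI = require('openai')
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

async function analyzeImage(imageUrl, prompt) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o', // assumption: substitute your preferred vision-capable model
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: prompt },
        { type: 'image_url', image_url: { url: imageUrl } },
      ],
    }],
  })
  return completion.choices[0].message.content
}

// e.g., analyzeImage(event.MediaUrl0, 'Is this image a screenshot? ...')
// where MediaUrl0 is the first media item on an inbound Twilio MMS webhook
```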
Keyword: screenshot

Imagine you are stuck using a software application, and instead of calling or chatting with a bot or an agent, you start by sending in a screenshot. The solution could determine where in your application a customer is stuck and pull out any errors or warnings. The understanding of the screenshot would certainly give a support agent valuable context to begin a conversation and, in some cases, perhaps trigger a fix to solve the customer's issue instantly without any human agent involvement.

Prompt: Is this image a screenshot? If yes, is there a warning message? Respond with a yes or no regarding if there are warning messages. Summarize the messages in less than 15 words.

Response: Yes, this image is a screenshot. Yes, there is a warning message. The message indicates that an SMS number is unregistered.

Keyword: dog

Next, let's look at a simple analysis: send in a photo and determine if there is a dog in the photo. If there is, identify the breed, and return the response in JSON format so a developer can use the output to identify dogs and dog breeds programmatically. Throughout all of these examples, imagine the response from OpenAI being handled programmatically and triggering additional steps in your business systems.

How could this be used? Imagine a veterinarian office requesting a photo of a new canine patient.

Prompt: Is there a dog in this image? If yes, determine the breed. Give your response in JSON where the is_dog variable declares whether a dog is present and the breed variable is your determination of breed.

Responses:

```json
{ "is_dog": true, "breed": "Golden Retriever" }
{ "is_dog": true, "breed": "Catahoula Leopard Dog" }
{ "is_dog": false, "breed": null }
```

Keyword: people

How could this be used? For numerous reasons, it could be helpful to know whether an image contains people, or, really, anything else. Depending on the answer, your system can process accordingly. For example, if there are people in the photo, do not process it further. Conversely, your business rules may require "something" to be present in order to proceed; the AI model can make this determination. Going further, your business may want to screen or prevent images with certain objects or characteristics from additional processing. Your prompt could check for these forbidden objects and then PASS or FAIL.

Prompt: Are there people in this image? If yes, how many?

Response: Yes, there are people in this image. There appear to be ten individuals standing side by side with their arms raised against the sunset.

Keyword: retail

How could this be used? The applications in retail are numerous and span from returns, to sales, to upsells, to brand awareness, and more. To start, what could you do for a customer after identifying and classifying an item they send in?

Prompt: Are the clothing items in this image from the Men's or Women's Department? What type of clothing is it?

Response 1: The clothing item in the image is a polo shirt. While this particular style of shirt can be worn by both men and women, considering the straighter cut and broader shoulders, it is typically designed for men's fashion. Polo shirts are characterized by their collars, a few buttons at the neck, and often short sleeves. They can be worn in a casual or semi-formal setting.

Response 2: The clothing item in the image appears to be a women's dress with a gingham pattern. It features long, flared sleeves, which is a style often found in women's fashion. The dress has a short length and looks to have a fitted silhouette.

Keyword: recommend

How could this be used? OK, we saw a basic retail prompt. What about a more advanced scenario: being able to recommend products? Remember, a production implementation would include a trained model to recommend from a specific catalog! What if you could recommend something based on a color or a theme in the submitted image?

Prompt: Please recommend some products that go with the product in this image.

Response: The image shows a person wearing a gray polo shirt. When pairing items with a polo shirt for a smart-casual look, you might consider the following products:
Chinos – A pair of chinos in a neutral color like beige, navy, or black can complement the polo shirt for a classic and versatile outfit. 2. Jeans – For a more casual look, a pair of well-fitted jeans, either in a dark wash or light blue, would go well with the polo shirt. 3. Shorts – In a warmer climate or during summer months, pairing the polo shirt with some smart shorts, such as khaki or linen shorts, would create a comfortable and stylish ensemble. 4. Casual Jacket – Layering the polo with a lightweight casual jacket like a Harrington, bomber, or blazer can elevate the look for evening wear or cooler days. 5. Sneakers – Clean, minimalist sneakers would match the casual yet put-together vibe of the polo shirt. 6. Loafers or Boat Shoes – For footwear that leans towards the smart-casual side, loafers or boat shoes would be an excellent match for the polo shirt outfit. 7. Belt – A leather belt in a color that complements the shoes and pants would be a subtle but polished addition to the outfit. 8. Watch – A classic watch with a leather or metal band can accessorize and refine the overall appearance. Remember that the choice of accessories and companion clothing items should be guided by the occasion, location, and personal style preference. Keyword: insurance How could this be used? Shifting gears, could computer vision be helpful in analyzing insurance claims? How about starting a claims interaction by sending in photos of the damage? Prompt: Does this image show damage to a vehicle? If yes, where is the damage and what type of vehicle? Yes, the image shows significant damage to a vehicle. The vehicle appears to be a red car, and the damage is severe on its side, particularly around the rear wheel area. The rear wheel is bent outwards, and there is extensive denting and crumpling of the bodywork on the rear door and rear quarter panel. The rear part of the car, likely the bumper, is also visibly damaged and possibly separated from the rest of the vehicle structure. The window of the rear door is shattered or missing, and a plastic sheet has been taped over it, likely to cover the opening and prevent further interior damage or weather exposure. Yes, the image shows a damaged vehicle. It is a car that has sustained significant damage to the rear end. The trunk area is crumpled, with visible deformations and the bumper is nearly detached. The rear lights appear to be broken or missing, and the hood is also open, although it's unclear if this is related to the damage or if it has been opened for inspection. The paint on the damaged areas is visibly scraped and flaked off. Other vehicles can be seen in the background, but they do not appear to be damaged. Keyword: repair How could this be used? How about household repairs? Sending in an image might enable self-help, quickly route a request to a specialist, or even flag a potentially dangerous situation. Prompt: Is there any appliance in this image and if yes, what type of appliance is in the image? Is there any damage to the appliance? Yes, there is an appliance in this image—it's a stainless steel refrigerator with a water and ice dispenser located on the freezer door. From the angle and quality of this image, there doesn't appear to be any obvious damage to the appliance; it seems to be in good condition. However, a thorough inspection would require different angles and perhaps a closer look to confirm its state with certainty. Yes, there is an appliance in this image.
It appears to be a water heater, which is typically used in residential or commercial buildings to heat water. From the visible parts, the appliance seems to be intact and not showing any obvious signs of damage, but without a closer inspection, it's not possible to definitively conclude whether there's any internal damage or operational issues. The area around the water heater is a bit cluttered with debris, but the unit itself doesn't show any clear signs of external damage from the angle provided in this photo. Keyword: tool How could this be used? Ever get stuck trying to assemble or fix something? Prompt: What type of tool should I use for this screw or bolt? The screws in the image have a recessed cross, which is characteristic of Phillips head screws. To drive or remove these screws, you would typically use a Phillips head screwdriver that matches the size of the screw head. Ensure you select the right size to prevent stripping the screw head. The hardware shown in the image includes hex head bolts and hex nuts. To work with these, you would typically use a wrench or a socket set. For the bolts, a socket wrench or a spanner of the correct size would be ideal to turn the hexagonal heads. It's important to choose the correct size to avoid stripping the heads. For the nuts, you can use a similar socket wrench, nut driver, or spanner to turn them onto the thread of a bolt. Make sure the size of the tool matches the size of the nuts for a proper fit. Keyword: returns How could this be used? Handling returns and complaints about consumer products is another area with huge potential. Submitted images could trigger a response or actions which quickly address customer concerns while providing powerful feedback. Prompt: What type of product is in the image? Does there appear to be any damage to the product in the image? Give a concise response. The image shows pasta with a broken glass container. The pasta appears to be unharmed, but the container is shattered, which suggests it cannot be used to hold items safely anymore. The image shows a pair of camouflage cargo shorts. There are apparent signs of wear and tear, such as frayed edges, a few holes, and what may be paint stains or discolorations. These could be intentional design elements to give the shorts a distressed look, or they could be the result of actual damage from use. Keyword: ingredients How could this be used? Enjoyed your meal? Want to know the ingredients? Prompt: Please identify the ingredients in this meal. The meal in the image appears to be a rice-based dish accompanied by a variety of ingredients. Here's what I can identify: 1. Cooked rice – It looks like the base of the dish is made from rice, possibly brown rice due to its color and texture. 2. Eggs – There are two halves of a boiled egg with a soft yolk. 3. Green vegetables – These could be snap peas or sugar peas, and there are also slices of what appear to be zucchini or cucumber. 4. Leafy greens – On the side, there's a garnish that looks like fresh parsley or cilantro. 5. Onions or shallots – Some white, round pieces that could be cooked onions or shallots are mixed with the rice. The overall dish could be a type of vegetable stir-fry with rice or a rice bowl topped with various vegetables and a soft-boiled egg. The ingredients seem to combine elements of a healthy, balanced diet. Keyword: category How could this be used? Lastly, how about categorizing the submitted image and providing a description?
Prompt: Can this image be categorized as a photograph, a cartoon, a drawing, or a screenshot? Give your response in JSON where the category goes in the category variable and then add a description variable and give a concise description of the image. Respond with a JSON object with category and description properties. { "category": "screenshot", "description": "The image appears to be a schematic or architecture diagram. It includes various service icons and labels such as Twilio Messaging, Webhook, Microservice, AWS services (API Gateway, Lambda, S3, SNS), Data Storage, and OpenAI's gpt-4-vision-preview. There are arrows indicating the flow of information or process steps between the different services and components." } Wow! Building this proof of concept and working with OpenAI’s Vision model has been eye-opening for me, and hopefully these examples have inspired some use cases for you. Using images submitted by your customers truly has tremendous potential to supercharge how you engage. Go to part 2 to see how you can spin up this proof of concept application using Twilio, OpenAI, and AWS! Part 2: Configure and Deploy Learning by video can be effective. Please feel free to reference this video at any part of this exercise: Here is a blueprint of the proof of concept with details of each section below: Twilio handles inbound MMS and SMS messages (WhatsApp messages are handled the same way), and routes them to AWS. AWS handles the event-based system, which includes calling OpenAI’s gpt-4-vision-preview model for image analysis. 1. Inbound MMS and SMS As a Leader in CPaaS according to Gartner’s 2023 Magic Quadrant, Twilio is an excellent choice for your enterprise messaging. Your Messaging Senders (Long Codes, Toll Free Numbers, Short Codes, WhatsApp) in Twilio can be configured to route inbound messages to a webhook of your choice. Once a webhook is set, messages will be posted to your endpoint in real time. 2. Queue, Security Check and Initial Processing AWS API Gateway will allow you to create an endpoint to receive webhooks from Twilio. To handle spiky demand, this proof of concept has a direct integration between API Gateway and AWS SQS (Simple Queue Service). In addition to protecting your system from demand spikes, a queue could also be used to control how quickly you send images to analysis. An AWS Lambda function will pull messages from the queue at a set rate. This Lambda will first check to make sure that the message actually came from your Twilio account. Next, the function sends the entire webhook message to an AWS SNS topic for additional processing, while also pulling out media files and individually sending them to a different SNS topic for analysis. For production use, security should be much more comprehensive than validating that the message came from Twilio. Using AI is expensive, so you will want to have both security and solid business rules in place to determine which images should be analyzed. 3. Call OpenAI The Lambda marked with #2 is responsible for formatting the prompt that is sent along with the image. This proof of concept has several examples, but you will certainly want to engineer your own prompts to meet your needs. This Lambda uses a Layer to hold the OpenAI libraries.
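The heart of that call is small. Here is a minimal sketch of a vision request from Node.js, assuming the official openai npm package (v4 or later) and an OPENAI_API_KEY environment variable; analyzeImage and mediaUrl are illustrative names, not taken from the repo:
// Minimal sketch: send a prompt plus an image URL to the vision model.
// Assumes OPENAI_API_KEY is set in the Lambda's environment.
const OpenAI = require("openai");
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function analyzeImage(prompt, mediaUrl) {
  const response = await client.chat.completions.create({
    model: "gpt-4-vision-preview",
    max_tokens: 300,
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: prompt },
          { type: "image_url", image_url: { url: mediaUrl } },
        ],
      },
    ],
  });
  // The model's answer comes back as ordinary chat text.
  return response.choices[0].message.content;
}
The model's reply arrives as plain text (or JSON, if the prompt asks for it), which the downstream Lambdas can store in S3 and relay back over SMS.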
Architects may protest that this Lambda function will have to wait for a response from OpenAI, and that could be wasteful. I completely agree, but this is just a proof of concept. A production solution will likely want to use a different compute option to call OpenAI. Amazon Bedrock could be an option. 4. Additional Processing This proof of concept has two processors. The Lambda marked with #3 reads in the message and then saves it to an S3 bucket. There is an additional Lambda that reads in responses from OpenAI and calls the Twilio API to send an SMS containing the OpenAI response back to the “customer”. You can think of that Lambda as occupying the “Application” box above. For real use cases, you will tie in your own applications and data systems and build interactivity. Images sent into this system could trigger additional events based on the understanding of those images which, in turn, trigger other system events. Prerequisites This is not a beginner-level build! You need to have some knowledge of Twilio, AWS, and OpenAI to complete this tutorial.
Twilio Account. If you don’t yet have one, you can sign up for a free account here.
An SMS- and MMS-enabled phone number in your Twilio Account. Note that different countries have different registration requirements for utilizing phone numbers. You’ll need to fulfill the requirements for the number you purchase before continuing the demo.
OpenAI Account and an API Key with access to the gpt-4-vision-preview model, or another model (including models released after this post is published) that has the ability to analyze images.
AWS Account with permissions to provision Lambdas, Step Functions, S3 buckets, IAM Roles & Policies, an SQS queue, SNS topics, and a custom EventBus. You can sign up for an account here.
AWS SAM CLI installed
Let’s Build it! Here are the basic steps of our serverless multichannel build today:
Download the code and enter your API Keys
Deploy the stack
Set the Twilio Webhook
1. Download the Code for this Application Download the code from this repo, and then open up the folder in your preferred development environment. The repo contains all you need to spin up an AWS CloudFormation stack. First, we need to install a couple of Node packages. From the parent directory, cd into the two directories listed below and install the packages. Here are the commands:
$ cd layers/layer-openai/nodejs
$ npm install
$ cd ../../layer-twilio/nodejs
$ npm install
$ cd ../../..
Next, open up the file template.yaml in the parent directory in your favorite code editor. This yaml file contains the instructions needed to provision the AWS resources. For this step you will need your OpenAI API Key, your Twilio Account SID, and your Twilio Auth Token. Use the FIND function and search for TWILIO_ACCOUNT_SID and replace the placeholder value with your value. Use the FIND function and search for TWILIO_AUTH_TOKEN and replace the placeholder value with your value. The Twilio Auth Token appears in two places – be sure to replace both of them. Use the FIND function and search for OPENAI_API_KEY and replace the placeholder value with your value. Here is an example of what it will look like in template.yaml for the TWILIO_ACCOUNT_SID: 2. Deploy the Stack With those settings in place, we are ready to deploy! From a terminal window, go into the parent directory and run:
$ sam build
This command goes through template.yaml and prepares the stack to be deployed.
In order to deploy the SAM application, you need to be sure that you have the proper AWS credentials configured. Having the AWS CLI installed also makes it easier, but here are some instructions. Once you have authenticated into your AWS account, you can run:
$ sam deploy --guided --stack-name WebhookMediaAnalyzerPOC
This will start an interactive command prompt session to set basic configurations and then deploy all of your resources via a stack in CloudFormation. Here are the answers to enter after running that command (except, substitute your AWS Region of choice):
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: WebhookMediaAnalyzerPOC
AWS Region [us-east-1]: <ENTER-YOUR-AWS-REGION-OF-CHOICE>
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Disable rollback [y/N]: N
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]:
SAM configuration environment [default]:
After answering the last questions, SAM will create a changeset that lists all of the resources that will be deployed. Answer “y” to the last question to have AWS actually start to create the resources.
Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]:
The SAM command prompt will let you know when it has finished deploying all of the resources. You can then go to your AWS Console and CloudFormation and browse through the new stack you just created. All of the Lambdas, Lambda Layers, S3 buckets, IAM Roles, SQS queues, and SNS topics are created automatically. (IaC – Infrastructure as Code – is awesome!) Also note that the first time you run the deploy command, SAM will create a samconfig.toml file to save your answers for subsequent deployments. After you deploy the first time, you can drop the --guided parameter of sam deploy for future deployments. Back in the terminal window that you used to deploy this stack, the last output will contain the endpoint that you will need to enter in your Twilio Console to direct the webhook to the system you just spun up. Copy the endpoint in your terminal window as shown in the example below: 3. Set the Twilio Webhook Now you just need to configure that endpoint on one of your Twilio phone numbers. From the Twilio Console, go to Phone Numbers >> Active Numbers and select the number you want to use. On the Configure tab for that phone number, scroll down to Messaging Configuration. For A message comes in, select Webhook and paste your endpoint from above into the URL field. Save your changes, and you are good to go! For this proof of concept, we are configuring a specific phone number directly. We encourage the use of Twilio Messaging Services for most use cases! To test it out, you can send any image without text to that phone number and it will use the default prompt, which will return a caption for the image. If I send in the Twilio logo, it should look something like this: Or, I can send in my favorite photo of my dog and get this: Things get more interesting when you send in text along with the images, as that will initiate more complex prompts.
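Under the hood, the Lambda simply matches the keyword in the message body to one of the stored prompts (the full list follows below). The repo's actual structure may differ; here is a hypothetical Node.js sketch of that lookup, with PROMPTS and selectPrompt as illustrative names:
// Hypothetical keyword-to-prompt lookup; PROMPTS and selectPrompt are
// illustrative names, not taken from the repo.
const PROMPTS = {
  text: "Is there any text in this image? If yes, what are the first few words?",
  people: "Are there people in this image? If yes, how many?",
  default: "Write a caption for this image that is less than 15 words.",
};

// Body is the text, if any, that a customer sends along with the MMS media.
function selectPrompt(messageBody) {
  const keyword = (messageBody || "").trim().toLowerCase();
  return PROMPTS[keyword] || PROMPTS.default;
}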
The prompts I’ve added in the demo were covered thoroughly in part 1, so scroll back up to review the sample prompts with images! To recap, just send a text prompt along with your image. For example: ...and that will return: Here are the available prompts:
dog => Is there a dog in this image? If yes, determine the breed. Give your response in JSON where the is_dog variable declares whether a dog is present and the breed variable is your determination of breed.
screenshot => Is this image a screenshot? If yes, is there a warning message? Respond with a yes or no regarding if there are warning messages. Summarize the messages in less than 15 words.
category => Can this image be categorized as a photograph, a cartoon, a drawing, or a screenshot? Give your response in JSON where the category goes in the category variable and then add a description variable and give a concise description of the image. Respond with a JSON object with category and description properties.
text => Is there any text in this image? If yes, what are the first few words?
insurance => Does this image show damage to a vehicle? If yes, where is the damage and what type of vehicle?
retail => Are the clothing items in this image from the Men’s or Women's Department? What type of clothing is it?
recommend => Please recommend some products that go with the product in this image.
tool => What type of tool should I use for this screw or bolt?
repair => Is there any appliance in this image and if yes, what type of appliance is in the image? Is there any damage to the appliance?
people => Are there people in this image? If yes, how many?
ingredients => Please identify the ingredients in this meal.
returns => What type of product is in the image? Does there appear to be any damage to the product in the image? Give a concise response.
default => Write a caption for this image that is less than 15 words.
The request-and-response flow of this proof of concept (the system receives an MMS message and returns an SMS reply with the results from OpenAI) is the most user-friendly way to engage with this material. It is important to point out, though, that all of the JSON objects are stored in an S3 bucket. As you try this out in your own environment, be sure to view the JSON objects in the S3 bucket so that you are able to visualize how to consume these events in your system and turn them into actionable events and key data points for your customer personas. Here is a sample JSON object: Conclusion In this post, you learned how to spin up an AI image analysis-and-understanding proof of concept using Twilio Messaging, OpenAI, and AWS. The goal of this blog post was to get you excited about the possibilities of using images, AI, and Twilio Communications Channels. This post focused on MMS, but all of this could be used with Twilio’s WhatsApp Business API or email using Twilio SendGrid (here is another blog post about SendGrid’s Inbound Parse). The examples in this post cover many use cases and industries, and hopefully they have sparked ideas for your own business. This proof of concept should be straightforward to spin up in your own AWS environment, allowing your company to start experimenting with the computer vision capabilities of AI and then turning those results into events that build toward delightful, personalized experiences based on images instead of text. I want to emphasize that last point. This proof of concept shows an image being submitted via MMS and then a response is returned via SMS.
The analysis of these customer-submitted images can be turned into a deeper understanding of what your customer actually wants. The amount of information in an image dwarfs what your customers can convey in a text chat conversation. With an increase in data from your customers, your organization can reply with much more considered responses and, ultimately, superior engagement. A quick reminder: this is a proof of concept and the code base in the repo is NOT production ready. The use of Lambda to call OpenAI is particularly inefficient, as the function has to wait for a response. The code base is ready to help you try out your ideas and start building! When you are finished with the proof of concept, be sure to delete all of the AWS resources from the stack in CloudFormation in your AWS Console. Additionally, consider that you don’t control what your customers will send. Your production implementation should account for PII, or Personally Identifiable Information, and handle input based on your company’s policies and the license and implementation of the AI model you ultimately choose. Note that GPT Vision is currently in preview and has limitations and costs. Be sure to do your due diligence to determine the best model to meet your needs. Exciting times! Happy building! Dan Bartlett has been building software applications since the first dotcom wave. The core principles from those days remain the same, but these days you can build cooler things faster. He can be reached at dbartlett [at] twilio.com.
Introduction As the holiday season arrives, you might be thinking about how you can use your JavaScript skills to bring the gift of giving to your friends and family. Secret Santa events, where each person draws a name and acts as that person’s “Secret Santa” by giving them an anonymous gift, are a time-honored tradition. In this article, you’ll learn how to build an app with Twilio Programmable Voice and Airtable that facilitates managing participants and drawing names. App overview When a participant calls in to your Twilio phone number, they will hear an automated message that prompts them to either add their name to the participant list, or to learn whose name they’ve drawn, depending on whether or not the organizer has already instructed the app to draw names. The exact prompts a caller hears depend on whether names have already been drawn and whether the caller is the organizer. Prerequisites To get started with this project you’ll need:
A free Twilio account and Twilio phone number
An Airtable account and basic familiarity with the interface
A foundational understanding of JavaScript and Node.js
Set up your Airtable base Create your base and tables In your browser, navigate to Airtable and create a new base called Secret Santa. Change the name of the default table from Table-1 to Participants. Erase all the provided columns except for Name, and add two more:
Phone, of type Phone Number
Match, of type Single Line Text
Be sure to match the spelling and case of the column/field names as I have here so that they properly match what's in the code later on in this article. Create a second table inside your new base called Events. Likewise, delete all the provided columns except Name and then add three more:
Organizer, of type Single Line Text
Phone, of type Phone Number
NamesDrawn, of type Checkbox
Add preliminary data In the Events table, add the information for your Secret Santa event. Give it a name, like Secret Santa 2023. Under Organizer, add your name. Under Phone, add your phone number and be sure to include the + sign at the beginning and your country code. This is called E.164 format, and any time you type a phone number throughout this article it must be in this format. Here’s an example for a US based phone number: +15551234567. Leave NamesDrawn unchecked. Get your personal access token and base ID With your Airtable base open, look at the URL in your browser’s navigation bar. It should look something like this: https://airtable.com/appcTWmtcq5otMvnP/tblxxxxxxx/xxxxxxxxxx Your base ID is the part of that URL that begins with app and includes everything up until the next URL segment. In the above example it’s: appcTWmtcq5otMvnP. Copy your base ID and save it to a note or other document for safekeeping; you’ll need it in a subsequent step. Next, because this project involves using the Airtable API, you’ll need to create a personal access token to authenticate your app. Give your access token a generic name; something like Secret Santa is fine. Under the Scopes heading, click the + Add a Scope button and select both data.records:read and data.records:write. Under Access, select your Secret Santa base. Click the Create token button and a modal will pop up with your access token. Copy it and paste it somewhere safe immediately. This access token will only be displayed this one time. Once you close the modal, you won't be able to see it again. Be sure to store it, at least temporarily, in a document or note on your computer to access later on in this article.
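Optionally, you can confirm that the token and base ID work before wiring them into Twilio. The following is a quick local sanity check you could run with Node.js, assuming the airtable npm package is installed (npm install airtable) and the two values are exported as environment variables (the same names are used in the Twilio environment variables you will create in the next section):
// check-airtable.js – a quick, optional sanity check for your credentials.
// Assumes AIRTABLE_API_KEY and AIRTABLE_BASE_ID are exported in your shell.
const Airtable = require('airtable');

const base = new Airtable({ apiKey: process.env.AIRTABLE_API_KEY })
  .base(process.env.AIRTABLE_BASE_ID);

base('Events')
  .select({ maxRecords: 1 })
  .all()
  .then((records) => {
    console.log('Connected! First event:', records[0]?.get('Name'));
  })
  .catch((err) => console.error('Check your token and base ID:', err));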
Set up your Twilio Functions Service Create your two functions Your app’s code will be hosted on Twilio Functions, which is a serverless environment running Node.js. In your browser, navigate to the Services page inside the Twilio Functions segment of the Twilio Console. Click the Create Service button and when prompted, enter a name for your service. This name should be short and clear, “secret-santa” for example. Toward the top left of the page, click the Add + button and then select Add function to add two new functions to your service. Name the first function /receive-call and the second function /insert. Next to each function name, click on the Visibility icon to set the visibility of /receive-call to Public, and leave the visibility of /insert as Protected. Add your environment variables In the sidebar on the left, toward the bottom under the Settings & More heading, there is an option for Environment Variables. Click on this and add two environment variables. Take care to match the case and underscores that you see here:
For the first, add the key AIRTABLE_BASE_ID, paste in the Airtable base ID you collected earlier as the value, then click Add.
For the second, add the key AIRTABLE_API_KEY, paste in the Airtable personal access token you created as the value, then click Add.
Install the Airtable dependency Also under the Settings & More heading in the left sidebar, there is an option for Dependencies. Click on this. In the form field for Module, type airtable, and leave the field for Version blank. Click Add. This will install Airtable’s npm package for use when your function is deployed. Manage the call flow and handle user input Now that Airtable and your Functions service are set up, it’s time to start coding. To edit each function, click on its name in the left sidebar and the file will open in the code editor to the right of the sidebar. Delete all the provided default/boilerplate code in both of your new functions. Receive call scaffolding In /receive-call, paste in the following code:
const Airtable = require('airtable');

exports.handler = async function (context, event, callback) {
  const base = new Airtable({ apiKey: context.AIRTABLE_API_KEY }).base(context.AIRTABLE_BASE_ID);
  const twiml = new Twilio.twiml.VoiceResponse();

  /* HELPER FUNCTIONS */

  /* MAIN */

  return callback(null, twiml);
};
This code is scaffolding for your /receive-call function. It imports the Airtable package, connects to your Airtable base, creates a new TwiML voice response, and returns the required callback from the function. Collecting and managing input from participants Immediately above the /* HELPER FUNCTIONS */ comment line, add the new code shown below (the full function is repeated for context):
const Airtable = require('airtable');

exports.handler = async function (context, event, callback) {
  const base = new Airtable({ apiKey: context.AIRTABLE_API_KEY }).base(context.AIRTABLE_BASE_ID);
  const twiml = new Twilio.twiml.VoiceResponse();

  const events = await base('Events').select({
    fields: ["Name", "Phone", "NamesDrawn"],
    maxRecords: 1
  }).all();
  const secretSanta = events[0];
  const namesDrawn = secretSanta.get("NamesDrawn");
  const isOrganizer = event.From === secretSanta.get("Phone");

  /* HELPER FUNCTIONS */

  /* MAIN */

  return callback(null, twiml);
};
This code uses the Airtable API to pull information about your Secret Santa event into your app, including the name of the event, the phone number of the organizer, and whether or not names have been drawn.
Two variables are created using this information: namesDrawn and isOrganizer, both of which are boolean variables reflecting the state of the name drawing and whether the caller is the organizer, respectively. These variables will be used to manage the prompts that the caller hears. Next, under the /* HELPER FUNCTIONS */ comment but above the /* MAIN */ comment, paste in the following helper function:
const gather = () => {
  let message = '';
  if (namesDrawn) {
    message += `Welcome back to ${secretSanta.get("Name")}. Names have been drawn. To learn about who you've drawn, press 3.`
  } else {
    message += `Welcome to ${secretSanta.get("Name")}. Names have not yet been drawn. To participate in this secret santa event, press 1. `
    if (isOrganizer) {
      message += `To draw names, press 2.`;
    }
  }
  const gather = twiml.gather({ numDigits: 1 });
  gather.say(message);
}
This is the gather() helper function. It’s used to prompt a caller and collect a response from them, which is what Twilio calls gathering. Gathering is initiated by the TwiML <Gather> verb, demonstrated in the code above. When <Gather> runs, the caller is prompted to press a digit (or digits) on their phone, and those digits are captured. To process the captured digits, you can provide a callback URL; by default, the current function will be called again with the collected digits now available on the event.Digits property. Digits is a built-in Twilio property that contains the actual gathered digits. In the code above, a prompt string is created based on the values of the namesDrawn and isOrganizer variables. By way of the twiml.gather() command, this message is then spoken to the caller by an automated voice using the nested <Say> verb. When the caller enters a digit in response, the entire /receive-call function will run again and the code can process the user-supplied digit. To add the code for this processing, under the /* MAIN */ comment but above the callback, paste the following:
if (event?.Digits) {
  switch (event.Digits) {
    case '1':
      if (!namesDrawn) {
        recordName();
      } else {
        twiml.say("Names have already been drawn and this event is not open to new participants.")
      }
      break;
    case '2':
      if (isOrganizer) {
        await drawNames();
      } else {
        twiml.say('You are not authorized to draw names');
      }
      break;
    case '3':
      if (namesDrawn) {
        await revealName();
      } else {
        twiml.say('Names have not yet been drawn.')
      }
      break;
    default:
      twiml.say("Sorry, I don't understand that choice.");
      twiml.pause();
      gather();
      break;
  }
} else {
  gather();
}
The if-statement shown checks for the Digits property. If it doesn’t exist, then digits still need to be gathered, and so the gather() helper function will be called. If there is a Digits property, then the collected digits need to be evaluated. The switch-statement inside the if-statement does this evaluating, and branches the code based on the entered digit. Add the helper functions The switch-statement above references three additional helper functions: recordName(), drawNames(), and revealName(). In this step, you’ll add the code for each of these helper functions. Directly below the gather() helper function, add the following code:
const recordName = () => {
  const gather = twiml.gather({ input: 'speech', action: '/insert' });
  gather.say('Please say your name.')
}
This code creates the helper function recordName(). It’s run when a caller selects the prompt to add their name to the participant list stored in Airtable. It also uses the twiml.gather() command to collect input from the caller.
This time, however, the input is in the form of speech, not a pressed digit, and the input will be processed at the provided action URL, /insert (the other function you created earlier). You’ll deal with that function in a later step. Next, below the new recordName() function, add the following code:
const drawNames = async () => {
  twiml.say('Now drawing names, please hold');
  try {
    const participants = await base('Participants').select({ fields: ["Name", "Phone"] }).all();
    // Shuffle the participants (Fisher-Yates), then match each person to the
    // next person in the shuffled order. This circular assignment guarantees
    // that no one draws themselves and that everyone is drawn exactly once.
    const shuffled = [...participants];
    for (let i = shuffled.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
    }
    for (let i = 0; i < shuffled.length; i++) {
      const match = shuffled[(i + 1) % shuffled.length];
      await base('Participants').update([
        { "id": shuffled[i].getId(), "fields": { "Match": match.get("Name") } }
      ]);
    }
    await base('Events').update([
      { "id": secretSanta.getId(), "fields": { "NamesDrawn": true } }
    ]);
    twiml.say("Names have been drawn. Thank you.")
    twiml.hangup();
  } catch (err) {
    console.log(err);
    twiml.say("There was an error. Please try again.")
  }
}
This code is what manages the drawing of the names and the assigning of the Secret Santas. It pulls the participant list from Airtable, shuffles it, and then assigns each participant the next person in the shuffled order, so that every participant draws a match that is not themselves and not already a match to someone else. After assigning a match, it uploads the match’s name to the participant’s record in Airtable. When all the Secret Santas have been paired with a giftee, the code changes the value of the NamesDrawn field in the Events table to checked (or true). Next, below the drawNames() function, add the following code:
const revealName = async () => {
  const match = await base('Participants').select({
    fields: ["Match"],
    maxRecords: 1,
    filterByFormula: `{Phone} = "${event.From}"`
  }).all();
  twiml.say(`Your match is ${match[0].get("Match")}.`);
  twiml.hangup();
}
After the names have been drawn, participants can call in and find out whose Secret Santa they are. This is handled by the revealName() function above. The app, as described in this article, does not accommodate wish lists or other information. But you could experiment with what you've learned by adding a Wish List field in Airtable and having participants record their wishes when they add their name. The wishlist could then be repeated to the Secret Santa when they call in to find out about their match. That’s all for the /receive-call function. Click the Save button below the code editor, and then open your /insert function. Insert recorded speech to the Airtable base In the /insert function, add the following code:
const Airtable = require('airtable');

exports.handler = async function (context, event, callback) {
  const twiml = new Twilio.twiml.VoiceResponse();
  const base = new Airtable({ apiKey: context.AIRTABLE_API_KEY }).base(context.AIRTABLE_BASE_ID);

  const newParticipant = {
    "Name": event.SpeechResult,
    "Phone": event.From
  }

  try {
    const record = await base('Participants').create(newParticipant);
    console.log(record.getId());
    twiml.say("Thank you for participating in this Secret Santa event. You are now signed up.")
    twiml.hangup();
  } catch (err) {
    console.log(err);
    twiml.say("Your name was not saved. Please try again.")
  }

  return callback(null, twiml);
};
Please try again.") } return callback(null, twiml); }; This code is called when a caller has elected to add their name to the participant list. When prompted, they will say their name aloud. The twiml.gather() command will record this speech and transcribe it. When the transcription is complete, this function will be called. The SpeechResult property will be available on the function’s event object and it will contain the transcription string. It is this string that will be added to the Name field in the Participants table in Airtable. Before the call ends, the caller will hear a confirmation message that they are signed up, or alternatively, hear an error if their name was not properly inserted in the Airtable table. Click the Save button at the bottom of the code editor. You’re now done with the coding aspect of this project! At the bottom of the page, click Deploy All to deploy your app and set it live. Set up the webhook The final step is connect your phone number to your Functions service. Navigate to the Active Phone Numbers section of the Twilio Console and click on your Twilio phone number to configure it. Scroll down to the heading Voice Configuration. Under A call comes in, select Function. Under Service, select your Functions service. Under Environment, select ui, and finally under Function Path, select your function /receive-call. Scroll down to the bottom of the page and click Save Configuration. Test your Secret Santa app It’s time to test it out! Call your Twilio number and follow the prompts to add a name to your participants list. You can also add some names and numbers directly to your Airtable. Once you have a few names, call back and follow the prompts to draw names. Look in Airtable and you’ll see the matches! You can call back once again to hear your match’s name revealed as well. I hope you had a great time using Twilio Programmable Voice and Twilio Functions to build this Secret Santa app. Experiment with it by expanding its functionality through wishlists, multiple events, or other fun ideas! I can’t wait to see what you build. Ashley is a JavaScript Editor for the Twilio blog. To work with her and bring your technical stories to Twilio, email her at aboucher @ twilio.com. If you can’t find her there, she’s probably on a patio somewhere having a cup of coffee (or glass of wine, depending on the time).
Company Name: Paradox Founder Name(s): Aaron Matos LinkedIn: Paradox Company Page Industry: Talent Acquisition Technology Stage/Funding: Series C, $200 Million USD Location: Scottsdale, Arizona This is a post in a series of interviews with startup founders who have worked with Twilio Ventures. Twilio Ventures invests in companies building tomorrow’s solutions for developers and customer engagement applications. We had the pleasure of working with Aaron Matos on their startup, Paradox. Describe Paradox’s startup journey in 160 characters or less (the original length of a text message) Paradox is a leading conversational AI innovator, helping 700+ global clients get recruiting and HR work done faster through frictionless experiences people love. What does Paradox mean and/or how did it originate? We’ve always believed that recruiting is a people game – and if you get the people thing right, you can build teams that change the world. The problem, however, is that recruiting at most organizations is plagued by software, systems, and processes that slow things down and create bad experiences for candidates, recruiters, and managers. The net result: Inefficiencies that hurt hiring productivity and make companies less competitive for the best candidates on the market. We’re obsessed with the idea that conversational AI – or what we sometimes call “assistive intelligence” – can break that paradigm by fundamentally changing how we think about the core systems that drive enterprise hiring (ATS, CRM, the career site, etc.). We see a future where enterprise software feels invisible and conversational experiences become the new UI – automating repetitive tasks to help people spend more time with people, not software. That’s ultimately the mission that influenced our name — the paradoxical idea that we can create software that helps us spend less time... with software. What have you enjoyed most about building Paradox? For us, it’s all about building an amazing team that’s driven by helping our clients drive real results. One of our values is that we measure client success in hugs. For some, that’s a weird thing to say, but we take it to heart. We don’t celebrate until we see a client achieve the business impact they set out to achieve when they partnered with Paradox. When a client goes live and they first start to see the impact of their transformation, that’s what we live for. When they see candidates chat with their assistant, thank them for being helpful, schedule an interview in seconds (and in some cases get hired that same day) – their eyes light up, and you can really see them grasp how this could change the way they hire forever. What is one decision you made building Paradox that you would make differently today? I don’t worry too much about what we could have done differently. I worry about what we can do today that will make us better tomorrow. Some of that comes from learning from previous mistakes, but we try to think more practically with a growth mindset. We take responsibility for our successes and failures, and our curiosity and perseverance drive us every day. I could second guess a lot of things we’ve done, but I’m not sure it’s worth wondering what I would’ve done differently in that exact moment. It’s easy to play revisionist history, but I’m more focused on what we can learn from the decisions we did make and how we can use that to become better professionals and people. What should potential customers know that is unique about Paradox?
We’re a product and a client-focused company first – one that’s driven by how we can build things that help our clients solve real business challenges. We’re not about flash or buzzwords, or promoting our own success. We’re obsessed with truly transforming the industry and organizations we serve. We do believe we bring a new lens and frame to enterprise recruiting and HR technology that’s helping our clients completely rethink what’s possible, and that frame centers on some key pillars: Transform the experience: We believe it’s possible to deliver true consumer-like software experiences in hiring, onboarding, and employee engagement – experiences that differentiate against legacy systems by fully embracing a mobile-first, conversational-led philosophy every step of the way. Hire faster with less work: Paradox helps eliminate bottlenecks and friction in the process, automating work people might’ve otherwise needed to do – giving time back to recruiters and getting candidates through the process faster. By returning time to hiring teams and managers, we’re helping clients reallocate headcount to focus on higher-value work. Convert more to spend less: Most organizations spend a ton of money to attract candidates — only to send them through traditional enterprise ATS systems that weren’t built around experience or conversion. By removing friction and building the experience around conversational experiences, we can help increase candidate conversion to reduce ad spend and create immediate value. How are you building on Twilio? E.g., how do you use Email, SMS, WhatsApp, Verify, etc. to communicate with your customers? We built on Twilio from the earliest prototype because of its scalable model and ease of use. We’ve always been a messaging-first platform, using SMS as the primary channel for communicating about a job. As our platform has expanded globally, Twilio was an obvious partner to use for SMS, and as our platform demanded more capabilities such as Verify, video, and audio, Twilio has always been the first choice for us. Our business is built on the reliability and scalability of Twilio, so we were honored to have Twilio join us as an investor in our Series C fundraise as well. Why did you select Twilio Ventures as an investor? As we did our Series C, we wanted to continue our strategy of getting “smart money” around the table. Along with a fantastic coalition of financial investors, we were honored to have Workday Ventures and Twilio Ventures join as investors. Given our success building with Twilio and how core it is to our business, being closer with Twilio made all the sense in the world. How has Paradox benefited from partnering with Twilio? As a Twilio Ventures company, we have been able to get access to executives, understand future initiatives, and participate in the parts of Twilio's strategic direction that are important to us. We fully believe that great partnerships start with the shared goal of doing right by the client and serving their needs first. Twilio has shared in that philosophy and our partnership allows us to do more to further that mission together. Thank you for reading. If you would like to read more Twilio Startup Labs Founder Spotlight interviews and learn how others build, prototype, and demo on Twilio, please check out the series of articles here. For questions about Twilio Startup Labs or to learn how to get involved, please contact Frank Y. Owusu at fyawowusu@twilio.com.
To find out more about how Twilio supports Startups, check out Grow with Twilio.
Company Name: Termii Inc Founder Name(s): Gbolade Emmanuel LinkedIn: Termii Company Page Industry: CPaaS, Communications Platform as a Service Stage/Funding: Termii has raised $3.65m funding to fuel customer engagement innovation in Africa. Location: San Jose, Lagos & Abidjan This is a post in a series of interviews with startup founders who have worked with Twilio Startup Labs, a program for developers at early-stage startups to learn how to build, prototype, and demo on Twilio. We had the pleasure of working with Gbolade Emmanuel on their startup, Termii. Describe your company (Startup) journey in 160 characters or less (the original length of a text message) Termii: Nigerian startup founded in 2017, offers messaging and communication solutions for businesses in Africa. Why did you start your company? Termii was founded to address the need for effective communication solutions in Africa and facilitate business growth in the region. How are you building on Twilio? E.g., how do you use email, SMS, WhatsApp, Verify, etc. to communicate with your customers? Termii has leveraged Twilio's API to send SMS messages to customers in the US, whether for marketing campaigns, notifications, or tokens. This enables businesses using Termii to reach customers via text messages. What has the benefit or commercial impact been for your business since using Twilio? By leveraging Twilio SMS services, Termii has recorded improved customer engagement, which has translated into increased sales and brand awareness. Another benefit has been the global reach that Twilio offers, which has enabled businesses like Termii to expand beyond the shores of Africa and serve a wider customer base. By building on Twilio, what has the impact been for your customers? By building on Twilio, Termii has empowered our customers with reliable and scalable messaging solutions, enhancing their communication and customer engagement, ultimately driving business success. Have you enjoyed Twilio Startup Labs? And if so, why? Termii has enjoyed Twilio Startup Labs as it has given us access to Twilio's communications platform, which is valuable for startups like ours looking to build messaging, voice, and video capabilities into our applications. What is the best piece of advice you would give to founders who are looking to build on Twilio? Founders should focus on creating a clear and efficient messaging strategy using Twilio, tailor it to their target audience, and continually adapt to evolving customer preferences and technology trends for long-term success. What excites you most about being a founder? Being a founder at Termii is exciting because it provides the opportunity to create something significant, contribute to the betterment of society, and be a part of a dynamic and ever-evolving journey. It's a challenging and rewarding experience that I am passionate about. Thank you for reading. If you would like to read more Twilio Startup Labs Founder Spotlight interviews and learn how others build, prototype, and demo on Twilio, please check out the series of articles here. For questions about Twilio Startup Labs or to learn how to get involved, please contact Frank Y. Owusu at fyawowusu@twilio.com. To find out more about how Twilio supports Startups, check out Grow with Twilio.
Company Name: Postscript.io Founder Name(s): Adam Turner, Colin Turner, and Alex Beller LinkedIn: Postscript.io Company Page Industry: E-commerce SMS Marketing Stage/Funding: Series C, $65 million USD Location: Fully Remote This is a post in a series of interviews with startup founders who have worked with Twilio Ventures. Twilio Ventures invests in companies building tomorrow’s solutions for developers and customer engagement applications. We had the pleasure of working with Adam Turner, Colin Turner, and Alex Beller on their startup, Postscript.io. Describe Postscript’s startup journey in 160 characters or less (the original length of a text message) Postscript serves 12,000+ Shopify Merchants by making SMS their #1 revenue channel. 230+ folks, fully remote, delivering best in class service and strategy. What does Postscript mean and/or how did it originate? We chose the name Postscript for a few reasons: A “PS” at the end of a note or letter is friendly, relatable & casual. A Postscript is something you might add in a letter to a friend. We wanted our communications platform to have that warm, friendly feel, which our branding matches if you go to our site. We’re in marketing automation, so we thought that the “scripting” part was nice. Abbreviating with “PS” is fun. What have you enjoyed most about building Postscript? I’ve never learned so much so quickly. That pace of growth has been unrelenting, which has pushed the team to our limits in the best ways possible. We wouldn’t have it any other way! Most importantly, though, we’ve been able to make a tremendous impact for our customers. We’ve driven billions of dollars of e-commerce transactions that wouldn’t have been there otherwise. We’ve brought hundreds of millions of customers in deeper connection with brands that they love – how cool is that?! What is one decision you made building Postscript that you would make differently today? I would have hired more people earlier. There are benefits that compound with early employees, especially ones that are able to persevere through all the growth – this layer acts as a structural foundation for future hiring by becoming culture carriers, knowledge centres, recruiters, onboarding buddies, interviewing resources, etc. Our second stage of hiring was much larger than our first, which created a relative imbalance because our foundation was smaller than it could have been. This caused more pain than it needed to, and if we had hired folks earlier we could have avoided some of that pain. What should potential customers know that is unique about Postscript? Our suite of products (SMS Marketing, SMS Sales, Fondue CashBack) will make you the most money out of any provider, hands down. Our products and expertise for list growth, segmentation, targeting, and personalization result in merchants collecting and monetizing subscribers at the highest possible rate; We also believe in the power of using SMS for 1:1 conversations.
From being able to manually respond, flexibly automate responses, or have a team of expert sales associates reply back, Postscript is making SMS more human; We have a strong compliance-first approach to the channel and proactively add features and tools as both legal and industry requirements change; We are all-in on SMS and deliver strategies and product innovations to help make SMS the #1 revenue channel for ecommerce merchants; By working with Twilio as a partner, and as our exclusive global customer engagement platform, we are able to work closely with the carriers to help shape the future of industry requirements, balancing the needs of consumers with the complexities of supporting SHAFT-adjacent verticals like alcohol. How are you building on Twilio? E.g. How do you use Email, SMS, WhatsApp, Verify etc. to communicate with your customers? We use Twilio's APIs for programmable SMS and conversation management to power our e-commerce SMS marketing platform for over 12,000 Shopify merchants. Twilio is Postscript’s exclusive global customer engagement platform. Why did you select Twilio Ventures as an investor? Twilio has been with us since I began developing the product on Day 1. It’s been incredibly rewarding to grow into a success story of theirs from developer to strategic partner. Twilio has seen our growth and vision for what the channel can be for merchants, and it feels great to be so aligned with a commercial partner in finding success for our customers. How has Postscript benefited from partnering with Twilio? Our close partnership with Twilio has helped Postscript unlock much higher throughput for our customers, which is super important during peak sending times like BFCM. There are a lot of other benefits to us in being able to work closely with Twilio on larger topics, like how the carriers view alcohol, and to help advocate on behalf of merchants. We are also able to resolve issues with deliverability a lot faster because of our extremely close relationship with Twilio. Thank you for reading. If you would like to read more Twilio Startup Labs Founder Spotlight interviews and learn how others build, prototype, and demo on Twilio, please check out the series of articles here. For questions about Twilio Startup Labs or to learn how to get involved, please contact Frank Y. Owusu at fyawowusu@twilio.com. To find out more about how Twilio supports Startups, check out Grow with Twilio.
Fear not! In this guide, I'll break down APIs in straightforward terms to help you understand what they are, what they're used for, and how they benefit our everyday digital experiences. I’ll also discuss how APIs differ from SDKs, and – of course – explain the curl thing. Let’s dive in. What is an API? An API, or “Application Programming Interface”, essentially works like a messenger, carrying your request to its destination and resulting in some output. Many apps will use APIs to fetch data from a server, such as getting weather updates, retrieving news articles, or listing social media posts. Instead of each app creating its own weather forecasting system, they can use a weather service API to get reliable data. A great analogy for an API is to think of a waiter in a restaurant and how food is served to the customers. In a restaurant, you will have your menu which describes the food on offer, and a chef who cooks your food. However, you can’t just walk into the kitchen and ask for a plate of pasta! There must be a process in place that everyone must follow – in order to receive your food, you must communicate with your waiter using the menu to inform them what you are looking for. The waiter then passes on that information to the kitchen, which then prepares your food. Once complete, the waiter will bring your food to your table for you to enjoy. The waiter works the very same way as an API: an API (the waiter) works between you and the service you wish to communicate with (the kitchen), and delivers the data (food) to you. This way there is no need to expose sensitive data or create your own service. If a weather service has an API available, for example, you can interact directly with that service and receive real-time weather updates without developing your own weather tracking service. Just use what is already available. How great is that? Interested in learning more about APIs? Check out my colleague Craig Dennis’s awesome introduction to APIs video to learn more: APIs for Beginners 2023 - How to use an API How do you use an API? At a restaurant, a menu will display what food is on offer from the kitchen and what you can or cannot have. This works the very same way for APIs – the menu for an API is what we call “documentation”. API documentation is a detailed guide or manual that explains how to use a given API. It's a collection of instructions, just like a “menu,” that tells you what ingredients (parameters) are used in your dishes to get the outcome (results) you want. The documentation will show you what types of requests (orders) are accepted and how to issue them to get specific information. Depending on the size and history of the service, and how well it has been developed, the level or quality of documentation can vary. Some organizations may lack adequate documentation, causing challenges for developers, while others prioritize excellent documentation, valuing their developers' experiences more. At Twilio, we try to always deliver great customer experiences and make consistent conscious efforts to improve our documentation. We hope you find our own API documentation resourceful and helpful – check out the Twilio Docs, and let us know what you think. HTTP request methods When it comes to making an API request, there are some methods which you should be familiar with. These methods have an accepted usage across APIs – but I’ll warn you, always check the API documentation to verify they’ll work the way you expect.
HTTP (Hypertext Transfer Protocol) methods are essential components of the HTTP protocol, defining the actions that can be performed on a given resource (e.g., a web page, a document, or data) located on a web server. These methods determine the type of interaction between a client (e.g., a web browser – the customer in our analogy) and a server (the computer that hosts the website), allowing for various operations to be carried out. Even when you access a website from your browser, behind the scenes HTTP methods are being used. When making an API request, you will need to select the method that best suits what you are trying to do. Each HTTP method serves a distinct purpose and has specific characteristics. Below I will touch on four commonly used HTTP methods: GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data. If you wish to get some data from an API, GET is the method to include in your request. POST The POST method submits an entity to the specified resource, often causing a change in state or side effects on the server. If you wish to POST or send some data, this is the method to include in your request. PUT The PUT method replaces all current representations of the target resource with the request payload. If you want to replace some data with data in your request, this is the method you would use. DELETE The DELETE method deletes the specified resource. If you wish to delete some data through an API, the DELETE method should be used in your request. Making your first request Now that you have a good understanding of APIs, let's talk about making your first API request. There are many excellent ways to do this, and which one you use boils down to personal preference. I'm going to give you a brief tutorial on how to use curl, a command line client which can make API requests. If you are more comfortable using software with a UI, there are applications such as Postman available. If that is something you are interested in, check out our blog post How to use Postman to make requests and test APIs. For the purposes of this blog post, we will only be focusing on curl requests through your computer's terminal. What is curl? curl, which stands for "Client for URLs," is a widely used open-source command-line tool and library for making HTTP API requests and interacting with various protocols. It's been around since 1998, and you can find its homepage here. curl is designed to facilitate data transfer between servers and clients, making it a powerful tool for developers, system administrators, and individuals working with web services. You can install it via your own command line in a few steps, as seen below. Installation Installing curl will vary a bit, depending on your computer platform. I'll give you an overview of installation steps for Windows and macOS users. Windows Using curl for Windows: Download the curl executable for Windows from the official curl website: curl for Windows. Choose the appropriate version (either 32-bit or 64-bit) based on your system architecture. Extract the downloaded ZIP file to a location of your choice (e.g., C:\curl). Add the curl executable path to your system's PATH environment variable: Right-click on This PC or My Computer and select Properties. Click on Advanced system settings on the left sidebar. Click on the Environment Variables button. Under System variables, scroll down and find the Path variable, then click Edit.
Click New and add the path to the curl executable (e.g., C:\curl\bin). Open a new command prompt (cmd) and type curl to verify the installation. macOS: Using Homebrew (preferred method for macOS): Open Terminal, which you can find in the Utilities folder within the Applications folder, or use Spotlight search (Command + Space, then type "Terminal"). Install Homebrew (if not already installed) by running the following command in Terminal: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" After Homebrew is installed, you can install curl using the following command: brew install curl To verify the installation, type curl --version in Terminal. If successful, you should see your curl version number displayed. For example, when I tested for this article I saw curl 8.4.0 (x86_64-apple-darwin21.0). Send your first SMS message Now you should have curl successfully installed on your Windows or macOS system. You can choose quite a few things to do for your introduction to APIs – but I'm going to show you how to use Twilio's Programmable Messaging API to send an SMS. We'll start using it from the command line to hit the Twilio API and send our first SMS message. If you haven't done so already, you will need to purchase a Twilio phone number inside the Twilio Console to send SMS from. You'll then need to follow any additional registration steps for the number, depending on where the number originates and where you live. Log in to the Twilio Console, select Phone Numbers, and then click on the red plus sign to buy a Twilio number. Note that if you are using a free account, you will be using your trial credit for this purchase. On the "Buy a Number" page, select your country and check SMS in the capabilities field. If you'd like to request a number from your region, you can enter your area code in the "Number" field. Please note, countries and territories differ in their registration requirements for numbers. For example, 10 digit US numbers require a form of registration for A2P 10DLC before usage. This is the standard the United States messaging ecosystem has put in place to ensure that SMS traffic to US end-users through long code phone numbers is verified and consensual. Before you are able to use your number, you'll have to fulfill any regulatory or registration requirements for the number you purchase. Click the Search button to see what numbers are available, and then click Buy for the number that you like from the results. After you confirm your purchase, click the Close button. To be able to send an SMS via Twilio, you will need to wait for any necessary registration steps to be complete. Once that is done, you need to have access to your Twilio account credentials to authenticate the API request. Additionally, to send an SMS you need two phone numbers: the number of the sender, which is the number you bought in the previous section, and the number of the recipient, which can be your personal number. The Twilio credentials that you need are your "Account SID" and your "Auth Token". You can find both on the dashboard of the Twilio Console: Once you have this information, you are ready to go with your first API request. Send an SMS with curl Remember our restaurant analogy? Well, let's now take a look at the equivalent of our restaurant's menu, which we can find here: Messaging Resource Documentation When you open the link above, you will be presented with quite a lot of information surrounding our Messaging Resource.
On the right of the page you will see some code snippets and may notice various programming languages listed: Node.js, Python, C#, etc. These may be a little daunting at first, but you can ignore them for now as we are only focusing on curl for the purpose of this blog post. Make sure curl is selected here, and you will be presented with the appropriate template syntax for a request to send a message. Click the copy icon shown in the image above (in the code box in the upper right hand corner) to copy the entire code snippet and paste it into a text editor of your choice. You can also copy and paste the code snippet seen below the language tabs. Here's the example curl script:
curl -XPOST https://api.twilio.com/2010-04-01/Accounts/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Messages.json \
--data-urlencode "To=+13105567965" \
--data-urlencode "Body=Hello from Twilio!" \
--data-urlencode "From=+12125551114" \
-u 'ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:your_auth_token'
This example sends an outbound message from the sender (212) 555-1114 (+12125551114) to the recipient at (310) 556-7965 (+13105567965), and includes the following message: Hello from Twilio! To make this script work for you, make the following updates, and then paste it into a terminal window: Line 1: update with your Account SID Line 2: update with a valid destination Line 3: update with the desired message text Line 4: update with a valid sender number Line 5: update with your Account SID and Auth Token Once you have hit enter in your terminal, a text message will be sent to the destination phone number you supplied in your request above. Additional optional parameters can be added to your request for requesting status change updates, adding a price limit, and more. For full details on each of the available options, and code samples from our Helper Libraries, please see Message Parameters (Twilio Docs). API vs SDK Now that you have a basic understanding of an API, you may ask yourself: what is the difference between APIs and SDKs? In software development, there is a lot of discussion around SDKs, or Software Development Kits, so it's important you are able to differentiate between the two. When you want to start developing an application, there are many packages you may need to install to perform specific tasks within your application. One of these, for example, could be connecting to Twilio to send a message. If you built an application to interact with Twilio, the last thing you want is to manually write API requests for each action you wish to carry out (like we did above!). Manual API requests are fine to get data quickly or test an API, but rarely the optimal approach when it comes to a fully built, production application. Therefore, an approach which abstracts away the details of the API and makes it easier to program in your language of choice would be best. And you guessed it – this is where an SDK comes in. Here at Twilio, we differentiate between our available SDKs: Helper Library: A library used to develop server side/web applications Client Side SDK: SDK used to develop mobile applications (iOS/Android) An SDK is essentially a collection of packages of pre-written code resources that you can install within your application. Rather than re-writing code to carry out various aspects of your application, you can install the SDK in the language you need, then use the methods built into the SDK to carry out these tasks. Why reinvent the wheel, right? As an example, imagine you want to build a treehouse.
To do that, you need various tools such as a hammer, nails, a saw, and so on. An SDK is like a special kit of tools designed for a specific task, such as creating mobile apps, games, or other software. This kit includes things like code libraries, pre-written pieces of code, and documentation that make it easier for developers to create software for a particular platform or device. Instead of starting from scratch, you can use the tools in the SDK to save time and effort. So, an SDK is like a handy set of tools that simplifies the process of building software, just like a toolbox makes it easier to build a treehouse. Twilio's Software Development Kits When it comes to Twilio, we have a vast range of SDKs available. Whether you want to introduce Voice calling to your application, or you wish to have a 1:1 Video conversation with a doctor, Twilio has you covered with SDKs that help you develop those applications quicker than you could by working with our APIs directly. You can check out all the SDKs available in our product documentation. I have also included a list of the Helper Libraries we have for messaging, in case those interest you too. Feel free to choose a language and dive right in: C# with .NET Framework C# with .NET Core Java Node.js PHP Python Ruby Go
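To see the difference in practice, here is the earlier curl SMS request rewritten with the Node.js Helper Library – a minimal sketch, assuming you have run npm install twilio and stored your credentials in the TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN environment variables:

const twilio = require('twilio');
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// The helper library assembles the URL, form encoding, and auth header for you.
client.messages
  .create({
    to: '+13105567965',
    from: '+12125551114',
    body: 'Hello from Twilio!',
  })
  .then((message) => console.log(message.sid));

Compare this with the raw curl request above: it is the same API underneath, but the SDK hides the request details and returns a structured message object you can work with in your own code.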
We can't wait to see what you build. David Dooley is a Team Lead for the EMEA Developer Support Team at Twilio based out of Dublin. He's very passionate about helping our customers achieve success with Twilio and loves to work on complex problems. He also enjoys educating others, hence this blog post. In his spare time he likes to dabble in crypto and is a strong believer in blockchain technology. He can be reached at ddooley [at] twilio.com.
This is the year AI became ubiquitous. I interact with OpenAI's ChatGPT daily, seeking different sorts of help but asking a lot just out of curiosity. However, in certain situations – walking, driving a car, having no or poor cell service – accessing ChatGPT is impossible. I feel like there should be another way of interacting with OpenAI's GPT models! And that is how I came up with the idea of calling a phone number to talk with an AI. It took me fewer than two hours and about 50 lines of code to build a working solution for interacting with GPT models using a voice call with Twilio. Want to see how I did it? Continue reading. Tutorial prerequisites Before we can start building, you need to ensure you have Twilio and OpenAI accounts. You will need: A Free Twilio Account. You can sign up through this link. An OpenAI account and API key. How the solution will work You will be leveraging Twilio Studio, our drag-and-drop visual application builder, to handle the call state, and serverless Twilio Functions to send our prompts to OpenAI's API. Here is our high-level plan: The call will come in on a Twilio number The number will be configured to execute a Studio Flow The Studio Flow will greet the caller and ask for the prompt Once the caller has finished saying the prompt, the Studio Flow will invoke a Twilio Function, providing the transcript of the caller's prompt The Function will make an HTTP request to the OpenAI API and return the result to the Studio Flow The Studio Flow will say the text response to the caller and wait for another prompt Below is a screenshot of the Studio Flow I've built. You will find the exported JSON of the Flow on GitHub – you can use it to import the Flow into your Twilio Console.
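The heart of that plan is step 5, the Function that talks to OpenAI. As a rough sketch of what it might look like – the real implementation lives in the GitHub repo and is written in TypeScript; here I assume axios as a dependency, an OPENAI_API_KEY environment variable, and a prompt parameter passed in from Studio, all of which are illustrative:

const axios = require('axios');

exports.handler = async (context, event, callback) => {
  // Studio passes the transcription of the caller's speech as a parameter
  const prompt = event.prompt;
  try {
    const response = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      { model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: prompt }] },
      { headers: { Authorization: `Bearer ${context.OPENAI_API_KEY}` } }
    );
    // Hand the text back to Studio so it can be spoken to the caller
    callback(null, { result: response.data.choices[0].message.content });
  } catch (err) {
    callback(err);
  }
};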
Let's build it together Developer Environment Setup Before you continue, let's make sure you have the software you need: Node.js and npm Twilio CLI Twilio Serverless Toolkit I will use TypeScript for this tutorial, but it should work just as well with JavaScript. Clone the project The code of the function is available on GitHub here: https://github.com/kuschanton/call-gpt Configure your environment In order to run the code, you'll need environment variables to be in place. Go to .env-example in the root folder of the project, then copy the file and rename it to .env. Update the file according to the instructions in the comments above each environment variable in that file. Deploy functions The next step is to deploy our code to Twilio Functions. To do this, execute the following command in the root of your project: npm run deploy Once deployment is complete, you will see the URL of your newly created function. It will look like the following: Functions: [protected] https://call-gpt-0000-dev.twil.io/prompt Copy the URL – we are ready to move on to the next step. If you have problems deploying your code, check that the ACCOUNT_SID and AUTH_TOKEN environment variables are properly configured in your .env file. Studio Flow As mentioned before, we will use Studio to manage our call state. First, you will have to import the JSON file into Studio to have your Flow ready. To do this: Open your Twilio Console and navigate to Studio, then to Flows Click the Create new Flow button Give your Flow a name and in the New Flow modal select Import from JSON in the bottom left corner Paste the Flow JSON from the GitHub repository and click Next In the newly created Flow locate the http_prompt widget and click on it. In the widget configuration update the Request URL field with the URL you copied in the previous step, then click Save Don't forget to click the Publish button at the top to publish your Flow Buy a number To buy a number from the Twilio Console click Phone Numbers > Manage > Buy a number. Select a voice-capable number from the list and click Buy. You can refer to a support article here for more details. Configure the number Once you have the number, the only step left is to configure it. For this, click the Configure button in the confirmation dialog or navigate to your Active Numbers list and click on the number you'd like to use for this project. On the Number Configuration page, scroll to Voice Configuration, and in the A call comes in dropdown, select Studio Flow. Select your Flow in the list on the left. Save the configuration, and that's it! Run and Test To test our solution, call the phone number. After the greeting, you should be able to say a prompt and hear the answer from GPT. Conclusion Using Twilio Studio, Functions, and OpenAI's API we were able to spin up a small service that allows you to call a phone number and talk with GPT. The full code for the project is available on GitHub at https://github.com/kuschanton/call-gpt. Like what you built? See some of our other Functions and serverless tutorials on the blog. Anton Kushch is a Principal Solutions Engineer at Twilio. He is helping companies in EMEA build powerful customer engagement solutions powered by Twilio. He can be reached at akushch [at] twilio.com, or you can collaborate with him on GitHub at https://github.com/kuschanton.
As I've discussed during this email microservice series, Twilio SendGrid is an extremely powerful email platform – but it does require some building. In the first two posts in the series, I showed you a microservices architecture to handle outgoing API calls to Twilio SendGrid and how to handle SendGrid Event Webhooks. In this post, I'll bring it all home and show you how to handle SendGrid's Inbound Parse Webhook. Like the other two posts, I use SAM (Serverless Application Model) templates, so if you are familiar with AWS, you should not have any problem getting this microservice running in your account. GCP and Azure users can take the patterns (and even code) in this template and translate them into equivalent products in their respective platforms. Be sure to check out the entire Three Part SendGrid Microservices Blog Series: Serverless Microservice for Sending Emails using SendGrid Email API Serverless Microservice for handling SendGrid Event Webhooks Serverless Microservice for handling SendGrid Inbound Parse Each of these blog posts has three different components. This page walks you through the entire solution. The companion video actually talks you through the solution, how to spin it up, and how to try it out. Lastly, the actual code is heavily commented with tons of details, configuration options, and best practices. Part 3: Serverless Microservice for handling SendGrid Inbound Parse Let's get started! Why should I care about SendGrid Inbound Parse? Inbound Parse is typically used for two key purposes: Compliance: Many organizations need to keep a copy of all of their outbound emails for compliance reasons. Using Inbound Parse along with a BCC in all outgoing mail can be a viable solution. Interactivity: Enterprise email is often one-way, but customers expect two-way capabilities. How should you handle a customer reply to an order or support email? With Inbound Parse you can build a system to programmatically handle inbound email and expand how you interact with your customers via email. Learning by video can be effective. You can watch the video before proceeding or come back to it later! Here is what we are going to spin up: Let's walk through the flow starting from the top right. A. SendGrid Receives an Inbound Email The first step to using SendGrid Inbound Parse is to configure your MX records for the desired subdomain or domain to point to SendGrid. That configuration is not a part of this blog post, but you can read about it here. With MX records in place, inbound emails to a domain or subdomain will be routed to SendGrid and then forwarded to an endpoint of your choosing. When SendGrid Inbound Parse receives an inbound email, it forwards the email to your endpoint. SendGrid keeps track of the quantity of inbound emails it forwards, but no other metrics about the actual content of these messages. B. Receive the Inbound Email from SendGrid Inbound emails can have spiky demand. You need scalable systems in place to be able to handle incoming requests and place them in a queue for additional processing. This template offers two approaches to handling incoming events from SendGrid. Approach A: events go from API Gateway directly into an SQS Queue. Approach B: events go from API Gateway to a Lambda function which dumps the request into an S3 bucket. The S3 bucket then triggers an event which feeds into the SQS Queue. Which approach should you use? Approach A is the simplest, but because SQS has a maximum payload size, this option would not work well with larger emails.
You can search for an "average size of an email" and consider the size of emails that you expect to receive when evaluating this option. Approach A would work well with small, text-only emails. Approach B will be best for larger emails AND emails that will include attachments. Emails are dumped directly into an S3 bucket (with any attachments) so there is no concern about maximum SQS payload size. The SQS queue is still utilized to control flow to downstream processors. C. Process Inbound Email All inbound emails must first pass Basic Authentication. All emails passing the Basic Auth test are decoded and parsed and then saved to an S3 Bucket. In addition, basic details on each email are published to an SNS Topic for additional processing. Examples of additional processing could include: Update a CDP, CRM, or any other data store. Inspect and route emails to an application for additional handling (support, chat, inquiry, etc.). Inspect emails for errors or preference changes (unsubscribe, bounce, spam). Whatever else you want to do with inbound emails! If you have read this far, then I think you are ready to see how you can spin this up yourself! Let's get started... AWS Resources All of the AWS components of the serverless application are provided as "Infrastructure as Code" (oft-shortened to IaC) and deployed via CloudFormation into AWS. Here is an overview of the components: AWS SAM => an open-source framework that enables you to build serverless applications on AWS AWS CLI => a command line interface to AWS services. Not required, but recommended because it simplifies credentials when deploying AWS Lambda => serverless compute service S3 => Object Storage SQS => Simple Queue Service SNS => Simple Notification Service Prerequisites This is not a beginner-level build! You should have some knowledge of AWS, serverless computing, and programming. If you have already completed part 1 or part 2 of this series, then you should already have these in place! Twilio SendGrid Account. If you don't yet have one, you can sign up for a free account here. An authenticated domain and a verified sender in your SendGrid account. AWS Account with permissions to provision Lambdas, step functions, S3 buckets, IAM Roles & Policies, an SQS queue, and SNS topics. You can sign up for an account here. AWS CLI installed with AWS credentials configured. AWS SAM CLI installed. Node.js installed on your computer. Let's Build it! Here are the basic steps of our build today. Download the code Deploy the stack Set up Inbound Parse from your SendGrid Account Try it out! 1. Download the Code for this Application Download the code from this repo and then open up the folder in your preferred development environment. Note that this repo has an "aws_stacks" folder which contains 2 separate microservices. SendGridInboundParseHandler is the primary stack and must be installed first. The other stack (GenericInboundEmailHandler) shows how you can add on additional processing of inbound emails. Open up a terminal window and go into the primary stack aws_stacks/SendGridInboundParseHandler. First we need to install a node package to help us parse the multipart form data that SendGrid sends. From that parent directory, run the following:
$ cd layers/layer-parse-multipart-data/nodejs
$ npm install
$ cd ../../..
The command sequence should look like this: Now we are ready to enter our SendGrid credentials...
2. Enter your credentials for Basic Authentication Open up the file template.yaml in the parent "SendGridInboundParseHandler" directory. This yaml file contains the instructions needed to provision the AWS resources. To secure our endpoint we will use Basic Authentication. In the template.yaml file, use FIND and search for INBOUND_PARSE_USER. Set your own username and password combination, and note that for anything other than your initial testing, be sure to save credentials using more secure means like AWS Parameter Store or Secrets Manager. These credentials exist in two places, so be sure to find both occurrences and set your username and password in each. 3. Deploy Code With those settings in place, we are ready to deploy! From a terminal window, go into the parent (aws_stacks/SendGridInboundParseHandler) directory and run:
$ sam build
This command goes through the yaml file template.yaml and prepares the stack to be deployed. In order to deploy the SAM application, you need to be sure that you have the proper AWS credentials configured. Having the AWS CLI also installed makes it easier, but here are some instructions. Once you have authenticated into your AWS account, you can run:
$ sam deploy --guided --stack-name SendGridInboundParseHandler
This will start an interactive command prompt session to set basic configurations and then deploy all of your resources via a stack in CloudFormation. Here are the answers to enter after running that command (except, substitute your AWS Region of choice):
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: SendGridInboundParseHandler
AWS Region [us-east-1]: <ENTER-YOUR-AWS-REGION-OF-CHOICE>
Parameter RawEventsBucketName []: <some-unique-id>-raw-inbound-email-bucket
Parameter QueueName []: <some-unique-id>-sendgrid-inbound-email-queue
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Disable rollback [y/N]: N
SendGridEventToS3Function has no authentication. Is this okay? [y/N]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]:
SAM configuration environment [default]:
After answering the last questions, SAM will create a changeset that lists all of the resources that will be deployed. Answer "y" to the last question to have AWS actually start to create the resources. The SAM command prompt will let you know when it has finished deploying all of the resources. You can then go to your AWS Console and CloudFormation and browse through the new stack you just created. All of the Lambdas, Lambda Layers, S3 buckets, IAM Roles, SQS queues, and SNS topics are created automatically. (IaC – Infrastructure as Code – is awesome!) Also note that the first time you run the deploy command, SAM will create a samconfig.toml file to save your answers for subsequent deployments. After you deploy the first time, you can drop the --guided parameter of sam deploy for future deployments. The template only has one simple "downstream" processor to install. You could expand this processor to inspect inbound emails and take additional action (for example: update databases, update CDP/CRMs, trigger other events, or send reply emails) – see the sketch below for the general shape.
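As a rough sketch of what such a downstream processor could look like – a Lambda function subscribed to the SNS topic – consider the following. The field names on the message (subject, from, messageId) are illustrative; inspect the actual payload your primary stack publishes before relying on them:

// Downstream processor subscribed to the SNS topic published by the primary stack.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const email = JSON.parse(record.Sns.Message);
    // Example: flag messages that look like support requests
    if ((email.subject || '').toLowerCase().includes('support')) {
      console.log(`Routing ${email.messageId} from ${email.from} to the support queue`);
      // await createSupportTicket(email); // hypothetical integration point
    }
  }
};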
Open a terminal window from the parent directory and enter the following:
$ cd aws_stacks/GenericInboundEmailHandler
$ sam build
$ sam deploy --guided --stack-name GenericInboundEmailHandler
(accept the defaults)
Once that completes, continue to the next step. 4. Set up Inbound Parse from your SendGrid Account Now we are ready to tell SendGrid where to send inbound emails. In the AWS Console, go to CloudFormation and then select the Stack you just created (SendGridInboundParseHandler). Click on the OUTPUTS tab to reveal key::value pairs generated by this stack. We are looking for these two key::value pairs: SGInboundEmailToS3Api SGInboundEmailToSQSApi These two options correspond to the two types of endpoints described in section B of the introduction earlier in this blog post. To recap, one endpoint takes requests and dumps them into an S3 bucket while the other dumps the requests into an SQS queue. You can review section B above for more details, but you can use either or even both options. For this blog, we will proceed with SGInboundEmailToS3Api. The value for that key should look something like this: https://9nnnnnnn.execute-api.us-west-2.amazonaws.com/ To add in Basic Authentication, we add the username and password combination entered in the template.yaml file in section 2 above to the beginning of the URL. You would of course use your own username and password, but the resulting URL should look like this: https://parseUser:pass123@9nnnnnnn.execute-api.us-west-2.amazonaws.com/ These endpoints require specific paths added to the base URL (SGInboundEmailToS3Api and SGInboundEmailToSQSApi both use "/sendgrid-inbound"). So our final URL using the SGInboundEmailToS3Api key would be: https://parseUser:pass123@9nnnnnnn.execute-api.us-west-2.amazonaws.com/sendgrid-inbound Now we return to the SendGrid Console and go to SETTINGS => INBOUND PARSE. Click on the button that says ADD HOST & URL. Fill out the screen below: You would enter your own subdomain and your verified domain. The URL you configured above goes in the Destination URL field. You can optionally check the Check incoming emails for spam checkbox to receive a spam score for each inbound email. Leave the POST the raw, full MIME message checkbox unchecked because the template uses the email data sent as multipart. Click the ADD button and we are ready to go! 5. Try it out... Now the fun part! Simply send an email to your parse address and watch the magic happen! I will send a simple email with an attached photo of my dog. Now I will open up S3 in the AWS Console and go into the bucket that starts with twilio-sendgrid-inbound-parse-. Each inbound email gets its own unique id and a "folder" in S3. Here is the email I just sent: ...and when you go into that folder you will always see an email.json file with all of the text key/value pairs and also any attachments. For the email I just sent I see this: ...clicking into email.json reveals: ...all of the text data from the email is included in the JSON file. Clicking on the attachment (tongue.png) reveals: At this point you have a solution to save emails to your own AWS S3 Bucket. We are often asked how we recommend keeping a copy of emails sent out via SendGrid. You could use this solution to save a copy of all outbound emails by including a BCC to your parse domain in your outbound emails (note: BCCs do count against your email credits!). All of the emails would then be saved for archival purposes.
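To illustrate the BCC approach, here is a minimal sketch using the @sendgrid/mail helper (npm install @sendgrid/mail). The parse address is a placeholder – you would use whatever hostname you configured for Inbound Parse above:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env.SENDGRID_API_KEY);

// BCC the parse address so a copy of every outbound email flows back
// through Inbound Parse and lands in your S3 archive.
sgMail
  .send({
    to: 'customer@example.com',
    from: 'orders@yourdomain.com',       // your verified sender
    bcc: 'archive@parse.yourdomain.com', // placeholder parse hostname
    subject: 'Your order has shipped',
    text: 'Your tracking number is on its way.',
  })
  .then(() => console.log('Sent (and archived via the BCC)'));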
You could even attach an S3 Lifecycle Policy to your S3 bucket to automatically delete files once they reach a set age. Saving emails for archival purposes is terrific, but you can also act on these events for all sorts of use cases. In the AWS Console, go to Lambda, and then open the Lambda that starts with GenericInboundEmailHandle-. This is just a simple Lambda function that writes the inbound event to logs. This is a stub for you to build whatever functionality you need: Update your CDP, CRM or other data sources. Handle a sales inquiry. Route or reply to a customer support request. Update a dashboard. Trigger other events. Receive images or files programmatically. Build an email-based "chatbot". Whatever you can think of! You can look at the code of the function and see that it is pretty simple, as it just sends the incoming event to console.log. Click on the Monitor tab and then click View CloudWatch logs. This will take you to CloudWatch and let you see the output from the Lambda function. Again, this function is just a stub, so the logs in CloudWatch will just show the console.logs as shown below. It is up to you to handle these inbound email events however you need to! The data sent to the downstream processors via SNS is a subset of all of the data included in the email. For example: With that data, you can make updates or trigger additional events. If you need the full email contents (or the attachments) you can use the messageId to easily retrieve the email.json file from the S3 Bucket. Cleanup To avoid any undesired costs, you can delete the application that you created using the AWS CLI and the console. First, delete the S3 buckets. From the S3 "home" page, first empty the buckets by selecting the bucket and then selecting the EMPTY option. Next, delete the actual bucket by selecting the bucket and then the DELETE button. Next, delete the stack from CloudFormation in the AWS console. Select the DELETE STACK option. AWS SAM also creates a stack and an S3 bucket to manage deployments. You can delete them from CloudFormation in the AWS console following the same procedure as above. Deploy to production While you can get this system working pretty quickly, it is not ready for your production environment. Be sure to customize and refine this codebase for your needs, thoroughly test your version of this system in your environment, and configure tests, error handling, and anything else required by your development standards. Conclusion In short order you have the foundation for a serverless microservice for handling inbound emails with Twilio SendGrid Inbound Parse! The code is heavily commented. Dive in and see how you can configure your enterprise systems to leverage this event-based microservice AND how you can use Inbound Parse to expand how you interact with your customers via email. Speaking of engaging with your emails, be sure to check out part 1 and part 2 of this blog series to learn about sending emails with the SendGrid Email API and handling SendGrid Event Webhooks! *** Dan Bartlett has been building web applications since the first dotcom wave. The core principles from those days remain the same but these days you can build cooler things faster. He can be reached at dbartlett [at] twilio.com.
The first part of this blog series showed how enterprises could build a microservice to send outbound emails in a centralized and scalable way. In this second post on AWS microservices, we will handle incoming SendGrid Event Webhooks. These Event Webhooks are feedback from the engagements generated by outbound emails sent via the SendGrid Email API. This post – and yes, all three blog posts – use SAM (Serverless Application Model) templates. Basic familiarity with AWS is all that is required to spin these up. GCP and Azure users should benefit from these templates too, though, as they should chart an effective course for building similar functionality in those platforms. Be sure to check out the entire three part SendGrid Microservices Blog Series: Serverless Microservice for Sending Emails using SendGrid Email API Serverless Microservice for handling SendGrid Event Webhooks Serverless Microservice for handling SendGrid Inbound Parse There are three distinct parts to this solution: the blog, the companion video, and heavily commented code. Each part offers something different, so be sure to review all three. Part 2: Serverless Microservice for handling SendGrid Event Webhooks Let's get started! Why should I care about SendGrid Event Webhooks? We all agree that being able to send emails to your customers is valuable, but it is not enough. Did the email actually get delivered? Did your customer open it and click on any links? Does your customer not want to receive emails from you? SendGrid Event Webhooks allow you to get real-time answers to these questions. This critical customer engagement feedback is routed back to your systems so that you can incorporate it into your customer profiles as well as your monitoring and analytics. Learning by video can be effective. You can watch the video before proceeding or come back to it later! Here is what we are going to spin up in part two of the series: Let's walk through the flow starting from the top left. A. SendGrid Sends Emails and Events are Generated After SendGrid receives an API request to send an email, it quickly processes the request payload and delivers the email(s) to the Inbox Providers (Gmail, Outlook, Yahoo, etc.) who, in turn, deliver them to the email End Users. Through the journey of those emails, events are generated by the Inbox Providers (delivered, bounced, etc.) and the email End User (open, click, etc.) and sent back to SendGrid. As a SendGrid customer, you are able to consume these events and have the data sent back to your systems for processing via Webhooks. Note that Event Webhooks do have retry logic! B. Receive the Event Batch from SendGrid Event Webhooks can have spiky demand. You need scalable systems in place to be able to handle incoming requests and place them in a queue for additional processing. This template offers two approaches to handling incoming events from SendGrid. Approach A: events go from API Gateway directly into an SQS Queue. Approach B: events go from API Gateway to a Lambda function which dumps the requests into an S3 bucket. The S3 bucket triggers an event which feeds into the SQS Queue. Which approach should you use? First, you should know that SendGrid batches events as it receives them and sends them to your endpoint at regular intervals or when batches get too large. Approach A is the simplest, but because SQS has a maximum payload size, this option may not be effective for heavy or spiky volume. Approach B is likely best for most cases.
The batches from SendGrid are dumped directly into an S3 bucket so there is no concern about maximum payload size. The SQS queue is still utilized to control flow to downstream processors. C. Process Event Batch A Lambda function is attached to the SQS Queue. You have the ability to control the rate at which messages are consumed from the queue. The first order of business for this Lambda is to validate that each request actually came from SendGrid. This is done by taking the request headers x-twilio-email-event-webhook-signature and x-twilio-email-event-webhook-timestamp and validating them along with the key set in your SendGrid Console. All validated requests proceed, while unvalidated requests are blocked and can trigger alarms or additional processing. The payload is then pulled from the SQS message or from S3 (depending on the A or B choice in section B above), and then the array of events from SendGrid is separated out and each event is published separately to the SNS topic for additional processing by downstream consumers.
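For a sense of what that validation looks like in code, here is a minimal sketch using SendGrid's @sendgrid/eventwebhook helper package (the same package the template's Lambda layer installs). The function shape and variable names are illustrative; note that the raw, unparsed request body must be used for the signature check:

const { EventWebhook } = require('@sendgrid/eventwebhook');

// Returns true when the request was genuinely signed by SendGrid.
// publicKey is the verification key from your SendGrid Console.
const verifyRequest = (headers, rawBody, publicKey) => {
  const signature = headers['x-twilio-email-event-webhook-signature'];
  const timestamp = headers['x-twilio-email-event-webhook-timestamp'];
  const ew = new EventWebhook();
  const ecdsaKey = ew.convertPublicKeyToECDSA(publicKey);
  return ew.verifySignature(ecdsaKey, rawBody, signature, timestamp);
};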
D. Additional Processing This template has 3 separate "processing" AWS stacks that you can spin up to handle events published to the SNS topic: A Lambda function that saves each event to an S3 bucket. A Lambda function that saves each event to DynamoDB. A Lambda function that is a simple stub for you to handle however you need! If you have read this far, then I think you are ready to see how you can spin this up yourself! Let's get started... AWS Resources All of the AWS components of the serverless application are provided as "Infrastructure as Code" (oft-shortened to IaC) and deployed via CloudFormation into AWS. Here is an overview of the components: AWS SAM => an open-source framework that enables you to build serverless applications on AWS AWS CLI => a command line interface to AWS services. Not required, but recommended because it simplifies credentials when deploying AWS Lambda => serverless compute service S3 => Object Storage SQS => Simple Queue Service SNS => Simple Notification Service Prerequisites This is not a beginner-level build! You should have some knowledge of AWS, serverless computing, and programming. If you have already completed part 1 of this series, then you should already have these in place! Twilio SendGrid Account. If you don't yet have one, you can sign up for a free account here. An authenticated domain and a verified sender in your SendGrid account. AWS Account with permissions to provision Lambdas, step functions, S3 buckets, IAM Roles & Policies, an SQS queue, and SNS topics. You can sign up for an account here. AWS CLI installed with AWS credentials configured. AWS SAM CLI installed. Node.js installed on your computer. Let's Build it! Here are the basic steps of our build today. Set up an Event Webhook from your SendGrid Account Download the code Deploy the stack Update the Webhook back in SendGrid Try it out! 1. Set up an Event Webhook from your SendGrid Account From your SendGrid Console, go to SETTINGS => MAIL SETTINGS => EVENT WEBHOOKS. Click on the button that says CREATE NEW WEBHOOK. The screen should look like this: Enter a Friendly Name of your choosing, put in a fake Post URL for now (we will generate the real Post URL shortly), and then return here later to update the Event Webhook. You can select which events you wish to receive. For simplicity I am just going to receive Opened and Delivered events, but there are many events that you are able to track. Note that paid Email API plans can have multiple Event Webhooks! Why might you have different Event Webhooks? Some enterprises may want to send different types of events to different places. For example, user generated events like clicks and opens may go to one webhook, while provider events like delivered, bounced, and deferred may go to a different webhook. Next, scroll down the page and enable SIGNATURE VERIFICATION. Save the Event Webhook and then click on EDIT to return to this view and note the new VERIFICATION KEY. Copy this as you will need to enter it shortly. 2. Download the Code for this Application Download the code from this repo and then open up the folder in your preferred development environment. Note that this repo has an aws_stacks folder which contains 4 separate microservices. SendGridEventWebhookHandler is the primary stack and must be installed first. The other 3 stacks (SaveEventWebhookToS3, SaveEventWebhookToDynamo, and GenericSendGridEventHandler) consume events that come in. You can install one of them or all of them. Open up a terminal window and go into the primary stack aws_stacks/SendGridEventWebhookHandler. First we need to install a SendGrid node package to help us validate incoming requests. From that parent directory, cd into the directory below and install the package. Here are the commands:
$ cd layers/layer-sendgrid-eventwebhook/nodejs
$ npm install
$ cd ../../..
The command sequence should look like this: Now we are ready to enter our SendGrid credentials... 3. Enter your SendGrid Account Credentials Open up the file template.yaml in the parent "SendGridEventWebhookHandler" directory. This yaml file contains the instructions needed to provision the AWS resources. As mentioned in step 1 above, we are going to use the key set in the SendGrid Console to validate all requests coming into this endpoint. We can also use Basic Authentication. In the template.yaml file, use FIND and search for EVENT_WEBHOOK_USER. Set your own username and password combination, and note that for anything other than your initial testing, be sure to save credentials using more secure means like AWS Parameter Store or Secrets Manager! We will add this username and password combination, along with the endpoint we generate, in a subsequent step. Next, use FIND to search for SENDGRID_WEBHOOK_PUBLIC to bring up this block of yaml code: This code enables you to set up both types of endpoints. For the initial test use the first one (SENDGRID_WEBHOOK_PUBLIC_KEY_API_TO_S3) and enter the key that you created in step 1 above when you set up the Event Webhook in the SendGrid Console. 4. Deploy Code With those settings in place, we are ready to deploy! From a terminal window, go into the parent (aws_stacks/SendGridEventWebhookHandler) directory and run:
$ sam build
This command goes through the yaml file template.yaml and prepares the stack to be deployed. In order to deploy the SAM application, you need to be sure that you have the proper AWS credentials configured. Having the AWS CLI also installed makes it easier, but here are some instructions. Once you have authenticated into your AWS account, you can run:
$ sam deploy --guided --stack-name SendGridEventWebhookHandler
This will start an interactive command prompt session to set basic configurations and then deploy all of your resources via a stack in CloudFormation.
Here are the answers to enter after running that command (except, substitute your AWS Region of choice):
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: SendGridEventWebhookHandler
AWS Region [us-east-1]: <ENTER-YOUR-AWS-REGION-OF-CHOICE>
Parameter RawEventsBucketName []: <some-unique-id>-raw-events-bucket
Parameter QueueName []: <some-unique-id>-sendgrid-events-queue
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Disable rollback [y/N]: N
SendGridEventToS3Function has no authentication. Is this okay? [y/N]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]:
SAM configuration environment [default]:
After answering the last questions, SAM will create a changeset that lists all of the resources that will be deployed. Answer "y" to the last question to have AWS actually start to create the resources. The SAM command prompt will let you know when it has finished deploying all of the resources. You can then go to your AWS Console and CloudFormation and browse through the new stack you just created. All of the Lambdas, Lambda Layers, S3 buckets, IAM Roles, SQS queues, and SNS topics are created automatically. (IaC – Infrastructure as Code – is awesome!) Also note that the first time you run the deploy command, SAM will create a samconfig.toml file to save your answers for subsequent deployments. After you deploy the first time, you can drop the --guided parameter of sam deploy for future deployments. 5. Update the Webhook back in SendGrid With the primary stack deployed, we can now update the SendGrid webhook with the newly created endpoint. In the AWS Console, go to CloudFormation and then select the Stack you just created (SendGridEventWebhookHandler). Click on the Outputs tab to reveal key::value pairs generated by this stack. We are looking for these two key::value pairs: SGEventWebhookToS3Api SGEventWebhookToSQSApi These two options correspond to the two types of endpoints described in section B of the introduction earlier in this blog post. To recap, one endpoint takes requests and dumps them into an S3 bucket while the other dumps the requests into an SQS queue. You can review section B above for more details, but you can use either or even both options. For this blog, we will proceed with SGEventWebhookToS3Api. The value for that key should look something like this: https://9nnnnnnn.execute-api.us-west-2.amazonaws.com/ To add in Basic Authentication, we add the username and password combination entered in the template.yaml file in section 3 above to the beginning of the URL. You would use your own username and password, but the resulting URL should look like this: https://eventWHUser:pass123@9nnnnnnn.execute-api.us-west-2.amazonaws.com/ These endpoints require specific paths added to the base URL (SGEventWebhookToS3Api adds /sendgrid-events and SGEventWebhookToSQSApi adds /twilio-sendgrid). These paths are set in two different places. The path for SGEventWebhookToS3Api is set on the "Event" of the lambda function in the template.yaml file. The path for SGEventWebhookToSQSApi is set in the api.yaml file.
So our final URL using the SGEventWebhookToS3Api key would be: https://eventWHUser:pass123@9nnnnnnn.execute-api.us-west-2.amazonaws.com/sendgrid-events Now we return to the SendGrid Console and go to SETTINGS => MAIL SETTINGS => EVENT WEBHOOKS. Select the edit option for your Event Webhook and then paste your new endpoint into the Post URL field as shown below: Save your changes and you are now ready to receive email events from SendGrid! 6. But what do we do with the events? We can receive all of these events now, but we have not set up any processors. As discussed in introduction section D above, this repo comes with three processors. You can choose which ones you want to spin up, but these instructions will show you how to quickly install all three. Open a terminal window, go into the aws_stacks/ folder, and then enter the following:
$ cd SaveEventWebhookToS3
$ sam build
$ sam deploy --guided --stack-name SaveEventWebhookToS3
(accept the defaults)
$ cd ..
$ cd SaveEventWebhookToDynamo
$ sam build
$ sam deploy --guided --stack-name SaveEventWebhookToDynamo
(accept the defaults)
$ cd ..
$ cd GenericSendGridEventHandler
$ sam build
$ sam deploy --guided --stack-name GenericSendGridEventHandler
(accept the defaults)
7. Try it out... If you started with the first blog post in this series, you can use that to send an email. If not, send an email via your SendGrid account; you should start receiving events immediately and be able to work with them in your processors. Based on the sample Email API payload below, this is what you should expect to see after sending an email via the SendGrid Email API: Processor: Save to S3 Bucket In your AWS Console, go to S3 and then go into the S3 Bucket that starts with "twilio-sendgrid-event-webhook...". You should see a folder with a date in yyyy-mm-dd format. Open that folder and you will see additional subfolders for each event type. You can go into any of those directories and inspect the JSON files for each event. They will look like this: Note that "categories" and the custom arguments (customArgs) submitted with the request are carried through to the Event Webhook along with other expected key/value pairs, so you have everything you need to tie this event back into your profile for this customer and into your analytics and data warehouse infrastructure. Processor: Save to DynamoDB From your AWS Console, go to DynamoDB, select "Tables", and then choose the table that starts with "SendGridEventWebhookDynamoDBTable...". Then click on Explore Items or click the button that says Explore Table Items. You will then be able to browse through all of the events that have been added to this table. The template sets the primary key as the "to" email address, and the sort key is a concatenation of the event description, a timestamp, and the SendGrid event id. You can of course change this to be whatever you want, but this configuration will allow you to make some initial queries. An item will contain all of the other parameters included in the event: Processor: Generic Lambda Handler The last processor is a simple Lambda function that just writes the inbound event to logs. This is a stub for you to build whatever functionality you need: Update your CDP, CRM, or other data sources. Handle a bounce or spam report. Handle an error. Update a dashboard. Trigger other events. Open up Lambda in your AWS Console and then click on the function that starts with GenericSendGridEventHandl-SendGridGenericHandlerFu....
You can look at the code of the function and see that it is pretty simple, as it just sends the incoming event to console.log. Click on the MONITOR tab and then click VIEW CLOUDWATCH LOGS. This will take you to CloudWatch and let you see the output from the Lambda function. Again, this function is just a stub, so the logs in CloudWatch will just show the console.logs as shown below. It is up to you to handle these events however you need to! Cleanup To avoid any undesired costs, you can delete the application that you created using the AWS CLI and the console. First, delete the S3 buckets. From the S3 "home" page, first empty the buckets by selecting the bucket and then selecting the EMPTY option. Next, delete the actual bucket by selecting the bucket and then the DELETE button. Next, delete the stack from CloudFormation in the AWS console. Select the DELETE STACK option. AWS SAM also creates a stack and an S3 bucket to manage deployments. You can delete them from CloudFormation in the AWS console following the same procedure as above. Deploy to production While you can get this system working pretty quickly, it is not ready for your production environment. Be sure to customize and refine this codebase for your needs, thoroughly test your version of this system in your environment, and configure tests, error handling, and anything else required by your development standards. Conclusion In short order, you now have the foundation for a serverless microservice for handling email events from Twilio SendGrid Event Webhooks! The code in the repo is heavily commented. Dive in and see how you can configure your enterprise systems to leverage this event-based microservice AND how you can use SendGrid's robust tracking capabilities to get a better sense of what you are sending and how your users are engaging with your emails. Speaking of engaging with your emails, be sure to check out part 1 and part 3 of this blog series to learn about sending emails with SendGrid, as well as Inbound Parse! *** Dan Bartlett has been building web applications since the first dotcom wave. The core principles from those days remain the same but these days you can build cooler things faster. He can be reached at dbartlett [at] twilio.com.
Twilio SendGrid is an extremely powerful platform for sending and receiving emails, but it does require some building. When talking to customers about using the SendGrid Email API, I often talk about a "communications layer" or a "microservice" or some way to enable an organization to effectively use email at scale. In these conversations, I have said many times that "if I were building this, I would do it this way..." AWS is a leader in cloud computing and serverless technologies, and event-driven architectures can be especially attractive for email use cases, where resources can be spun up and down on demand. In this series of three blog posts, I'll use that approach heavily! These posts use SAM (Serverless Application Model) templates, so anyone with some familiarity with AWS should be able to spin them up in a few minutes. If you use GCP or Azure, these templates should give you some ideas on building something similar using corresponding resources in those platforms. Throughout everything, the blog posts and code make recommendations on best practices so you can get the most out of the SendGrid Email API. Three part SendGrid Microservices blog posts: Serverless Microservice for Sending Emails using SendGrid Email API Serverless Microservice for handling SendGrid Event Webhooks Serverless Microservice for handling SendGrid Inbound Parse Each blog post has the written content, a companion video, and heavily commented code! All three of these resources are valuable and provide different insights into the solutions. Be sure to check out all three. But for this post, I'll show you a serverless microservice architecture to send emails using the SendGrid API. Part 1: Serverless Microservice for Sending Emails using SendGrid Email API Learning by video can be effective. You can watch the video before proceeding or come back to it later! Here is the architecture we are going to spin up: Let's walk through the flow starting from the top left. A. Enterprise Applications You likely have multiple use cases that leverage email and multiple applications that need to trigger "email events" (a fancy way of saying "send an email to someone"). Email events can be transactional or promotional. If you design things correctly, you can also use the Email API to orchestrate email "blasts" with your promotional content. Sending your emails through a consistent process flow will give your organization better control and visibility. B. Process the Event A Lambda function is triggered by an object loaded into the S3 Bucket or published to the SNS Topic. The Lambda generates a unique ID for this request so emails can be traced back to the originating request. Next, the Lambda parses the object and adds any additional configurations. For example, this step can automatically or programmatically add custom arguments, categories, a bcc (for archival purposes), or anything required to standardize your sending flow. Once properly configured, the new object is saved into the /requests folder in the S3 bucket. A key to a successful SendGrid implementation is making sure that you are tracking all relevant data points. This "process event" step is where you can inject and format data before you make the API call to SendGrid. This will allow you to receive that same data back to track email events (opens, clicks, delivered, spam, etc.). It is important to note that saving all requests into this S3 bucket before they are sent to SendGrid is an efficient (and cost effective) way to save a record of all of your outbound emails. While these are not copies of the emails, they do contain all of the relevant details. This could address keeping records for compliance or archival purposes. You can easily configure S3 Lifecycle Policies to delete any object (uploads, requests, and responses) saved to this S3 bucket after a defined period of time.
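As a rough sketch of this "process the event" step – using nanoid for the unique ID (the repo used later in this post installs a nanoid Lambda layer), but with illustrative bucket, field, and trigger details that will differ from the actual code:

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { nanoid } = require('nanoid');

const s3 = new S3Client({});

// Triggered by SNS: tag each send request with a unique id, standardize
// tracking fields, and stage it in /requests for the send-email Lambda.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const request = JSON.parse(record.Sns.Message);
    const requestId = nanoid();
    request.custom_args = { ...request.custom_args, request_id: requestId };
    request.categories = [...(request.categories || []), 'transactional'];
    await s3.send(new PutObjectCommand({
      Bucket: process.env.EMAIL_BUCKET,
      Key: `requests/${requestId}.json`,
      Body: JSON.stringify(request),
    }));
  }
};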
While these are not copies of the emails, they do contain all of the relevant details. This could address keeping records for compliance or archival purposes. You can easily configure S3 Lifecycle Policies to delete any object (uploads, requests, and responses) saved to this S3 bucket after a defined period of time. C. Make the API Call The send-email Lambda function is triggered by the JSON object loaded into the /requests folder of the S3 bucket. This Lambda opens the object, injects a timestamp for additional tracking purposes, and then executes the API call to SendGrid. D. Handle the Response The API call will return a response which can also be saved to S3 or sent to an SNS topic for additional processing. Every successful response from SendGrid contains an x-message-id which will help identify any events generated by the email (open, click, unsubscribe, delivered). With the unique ID generated in step B to identify the request and this x-message-id to identify the actual email, you now have everything you need to track and analyze your emails! If you have read this far, then I think you are ready to see how you can spin this up yourself! Let’s get started... AWS Resources All of the AWS components of the serverless application are provided as "Infrastructure as Code" (oft-shortened to IaC) and deployed via CloudFormation into AWS. Here is an overview of the components: AWS SAM => an open-source framework that enables you to build serverless applications on AWS AWS CLI => the command line interface for AWS. It’s not required, but recommended because it simplifies credentials when deploying AWS Lambda => serverless compute service S3 => Object Storage SQS => Simple Queue Service SNS => Simple Notification Service Prerequisites This is not a beginner-level build! You should have some knowledge of AWS, serverless computing, and programming. Twilio SendGrid Account. If you don’t yet have one, you can sign up for a free account here. An authenticated domain and a verified sender in your SendGrid account. AWS Account with permissions to provision Lambdas, Step Functions, S3 buckets, IAM Roles & Policies, an SQS queue, and SNS topics. You can sign up for an account here. AWS CLI installed with AWS credentials configured. AWS SAM CLI installed. Node.js installed on your computer. Let’s Build it! Here are the basic steps of our build today: Download the code Get and set your SendGrid Account API Key ID and API Key Deploy the stack Try it out! 1. Download the Code for this Application Download the code from this repo, and then open up the folder in your preferred development environment. The repo contains all you need to spin up an AWS CloudFormation stack. First, we need to install a couple of Node packages. From the parent directory, cd into the two directories listed below and install the packages. Here are the commands: $ cd layers/layer-sendgrid/nodejs $ npm install $ cd ../../layer-nanoid/nodejs $ npm install $ cd ../../.. The first part of the command should look like this: Now we are ready to enter our SendGrid credentials... 2. Enter your SendGrid Account Credentials Open up the file template.yaml in the parent directory. This yaml file contains the instructions needed to provision the AWS resources. In order to make an API call to SendGrid, you’ll need an API key with the proper “Mail Send” privileges. If you haven’t done so already, create a new API key and copy both the API Key ID and the actual API Key.
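Before wiring the key into the template, it may help to see roughly what the send-email Lambda from step C does with it. Here is a minimal, hypothetical sketch using the @sendgrid/mail helper; in the real code, the message object is read from the /requests folder rather than hardcoded, and error handling is omitted here for brevity.

const sgMail = require("@sendgrid/mail");

// For production, prefer AWS Secrets Manager or Parameter Store over
// storing the key as a plain environment variable
sgMail.setApiKey(process.env.SENDGRID_API_KEY);

exports.handler = async () => {
  const msg = {
    to: "customer@example.com",              // illustrative addresses
    from: "verified-sender@your-domain.com",
    subject: "Hello from the microservice",
    text: "Sent via the SendGrid Email API.",
    customArgs: { requestId: "abc123" },     // injected in the process step
  };

  const [response] = await sgMail.send(msg);

  // A 202 means accepted; the x-message-id ties Event Webhooks back to
  // this send, so persist it alongside the requestId
  console.log(response.statusCode, response.headers["x-message-id"]);
};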
This template lays the foundation to be able to use multiple API Keys AND it is best practice to include the API Key ID in your calls for reporting purposes. Here is a view of my API Keys page in my SendGrid console: In your template.yaml file, use a FIND function to search for SENDGRID_API_KEY_ID. The block of yaml code will look like this: Replace <YOUR-TEST-SENDGRID-API-KEY-ID> with your SendGrid API Key ID. Please note, as indicated in the comments, that for anything other than a quick proof of concept, you should use AWS Parameter Store or Secrets Manager to store your credentials. Next, use a FIND function to search for YOUR-TEST-SENDGRID-API-KEY. That block of yaml code will look like this: As above, swap in your API Key value. Lastly, this AWS stack includes an “Alarm” topic that can send notifications out when certain errors occur. For example, if an API request is malformed, an error would be generated. You can optionally enter an email address to receive these error alerts. Use FIND and search for some-domain.com to reveal this block of yaml code: You can uncomment the last three lines and then swap in the desired email address. 3. Deploy Code With those settings in place, we are ready to deploy! From a terminal window, go into the parent directory and run: $ sam build This command goes through the yaml file template.yaml and prepares the stack to be deployed. In order to deploy the SAM application, you need to be sure that you have the proper AWS credentials configured. Having the AWS CLI installed makes this easier, but here are some instructions. Once you have authenticated into your AWS account, you can run: $ sam deploy --guided --stack-name SendGridOutboundEmailMicroservice This will start an interactive command prompt session to set basic configurations and then deploy all of your resources via a stack in CloudFormation. Here are the answers to enter after running that command (substituting your AWS Region of choice): Configuring SAM deploy ====================== Looking for config file [samconfig.toml] : Not found Setting default arguments for 'sam deploy' ========================================= Stack Name [sam-app]: SendGridOutboundEmailMicroservice AWS Region [us-east-1]: <ENTER-YOUR-AWS-REGION-OF-CHOICE> #Shows you resources changes to be deployed and require a 'Y' to initiate deploy Confirm changes before deploy [y/N]: y #SAM needs permission to be able to create roles to connect to the resources in your template Allow SAM CLI IAM role creation [Y/n]: y Disable rollback [y/N]: N Save arguments to configuration file [Y/n]: y SAM configuration file [samconfig.toml]: SAM configuration environment [default]: After answering the last questions, SAM will create a changeset that lists all of the resources that will be deployed. Answer “y” to the last question to have AWS actually start to create the resources. Previewing CloudFormation changeset before deployment ====================================================== Deploy this changeset? [y/N]: The SAM command prompt will let you know when it has finished deploying all of the resources. You can then go to your AWS Console and CloudFormation and browse through the new stack you just created. All of the Lambdas, Lambda Layers, S3 buckets, IAM Roles, SQS queues, and SNS topics are created automatically. (IaC – Infrastructure as Code – is awesome!) Also note that the first time you run the deploy command, SAM will create a samconfig.toml file to save your answers for subsequent deployments.
After you deploy the first time, you can drop the --guided parameter of sam deploy for future deployments. 4. Send Some Emails! As shown in the architecture diagram, this system receives email “events” from your internal applications. These events need to be properly formatted JSON objects, and they can be either loaded into S3 or published to SNS (Simple Notification Service). We can use SNS to send out some test emails. From your AWS Console, go to SNS and then click on the TOPICS link or tab on that page, and then select the SNS Topic that starts with SendGridOutboundEmailMicroservice-TwilioSendGridEmailEventTopic. Going through the AWS Console like this is mimicking how your enterprise applications can publish email events! You can then click the button that says PUBLISH MESSAGE. Back in the code base, open the directory called “sample-email-json/” and open the file simple-single.json. Copy the JSON and paste it into the Message body textarea back in SNS. It should look like this: Swap in your email address for the “to” field and enter one of your verified sender email addresses in the “from” field, and then click PUBLISH MESSAGE! Initiate with S3 Instead If you want to initiate the send flow using S3 instead of SNS, go into your S3 bucket and create a folder called uploads. You then need to upload a JSON file to that folder to initiate the flow. 5. See the results... First and foremost, you should have received an email: That is straightforward enough, but let’s dig deeper. Go to S3 in your AWS Console and open up the bucket that starts with twilio-sendgrid-outbound-emails-. You will see folders for “requests” and “responses”. As you would expect, “requests” holds all of the JSON objects before they are sent to SendGrid, and “responses” holds the replies back from SendGrid. We will look at the requests first. In the requests folder, you will see a single JSON object that is labeled with the request ID generated by this stack. All requests sent using this system will have an object saved to this folder. Opening up a request, we can see that all of the information used to make the API call is stored in the JSON object: A few things to point out: This object contains all of the data and details in the outbound email and could satisfy the enterprise requirement to keep copies of outbound messages. Note that customArgs include the key:value pairs from the email event published in the step above plus additional key:value pairs. CustomArgs are the best way to add tracking data into your emails. Be sure to review the Lambda file process-email-event to explore what you can add into your email API calls and how to do it! All customArgs will be included in Event Webhooks! You can, and should, include categories as well! The requestId is generated by this system and ties Event Webhooks back to this specific request. Now let’s look at the response back from SendGrid. Go into the responses/ folder in the bucket and open the JSON file. There is nothing too exciting about this 202 response except for the x-message-id! That id will be included in Event Webhooks, so it makes sense to take note of this id (save it) and connect it to the request (and the user/campaign/case/whatever). In addition to saving the response to the S3 bucket, the API response is ALSO sent to the SNS topic for additional processing. What additional processing could you do? Error handling: If any response is not a 202 success, handle it appropriately. Store the x-message-id in your database linking it to the request.
Saving sending statistics for metrics, reporting, or dashboards. Bulk Email Sending Many enterprises want to send email “blasts” to many of their customers. Can this system be used for this purpose? Certainly. The SendGrid API can handle a tremendous amount of outbound emails and can help you scale to whatever throughput you require. You can build your enterprise system to send under these specifications by properly formatting the JSON objects sent into this flow. You could potentially put a queue in between the enterprise applications submitting email “events” if you wish to control throughput before entering this process. This microservice for calling the SendGrid Email API should give you the theoretical and practical foundation to build a system to send your emails at scale. Cleanup To avoid any undesired costs, you can delete the application that you created using the AWS CLI and the console. First, delete the S3 buckets. From the S3 “home” page, first empty the buckets by selecting the bucket and then select the EMPTY option. Next, delete the actual bucket by selecting the bucket and then the DELETE button. Next, delete the stack from CloudFormation in the AWS console. Select the DELETE STACK option. AWS SAM also creates a stack and an S3 bucket to manage deployments. You can delete them from CloudFormation in the AWS console following the same procedure as above. Deploy to production While you can get this system working pretty quickly, it is not ready for your production environment. If you bring this microservice to production, you’ll need to make some changes. Be sure to customize and refine this codebase for your needs, thoroughly test your version of this system in your environment, and configure tests, error handling, and anything else required by your development standards. Conclusion In short order, you have the foundation for a serverless microservice for sending outbound email “events” to SendGrid’s API! The code in the repo is heavily commented. Dive in and see how you can configure your enterprise systems to leverage this event-based microservice AND how you can use SendGrid’s robust tracking capabilities to get a much better sense of what you are sending and how your users are engaging with your emails. Speaking of engaging with your emails, be sure to check out parts 2 and 3 of this blog series to learn about SendGrid Event Webhooks AND Inbound Parse! *** Dan Bartlett has been building web applications since the first dotcom wave. The core principles from those days remain the same but these days you can build cooler things faster. He can be reached at dbartlett [at] twilio.com.
Company Name: Regal.io Founder Name(s): Alex Levin & Rebecca Greene LinkedIn: Regal.io Company Page Industry: Enterprise SaaS, AI-enabled telephone & sales automation Stage/Funding: Regal.io has raised $42.1 million through Seed and Series A funding rounds Location: New York, NY This is a post in a series of interviews with startup founders who have worked with Twilio Startup Labs, a program for developers at early-stage startups to learn how to build, prototype & demo on Twilio. We had the pleasure of working with Alex Levin & Rebecca Greene on their startup, Regal.io. Describe your company (Startup) journey in 160 characters or less (the original length of a text message) Regal.io is a phone & SMS outreach platform enabling $200M+/month in revenue. Regal’s AI-enabled tools power high-answer-rate, high-converting funnels. Why did you start your company? Before founding Regal.io, we were executives at Angi (NASDAQ: ANGI) and Handy (acquired by ANGI in 2018). While at Angi, our team successfully drove top-of-funnel growth, but even after rolling out website optimization and email/SMS re-marketing, only about four of 100 customers would convert. We found that if we called the 96% of “abandoned” customers and got them on the phone, they loved the attention, and converted at double the rate. This was the genesis of Regal.io. Knowing that legacy outbound contact center software only enables a "call more" or "call faster" strategy — which results in poor answer rates, low on-call conversion, and a terrible customer experience — we set out to develop a better approach. Regal uses real-time customer behaviors to personalize outreach, increase conversion, and boost revenue. Regal’s outbound phone and SMS sales solution empowers B2C brands to add a human touch at critical moments in their online experience, such as when customers “abandon” the sign-up or checkout flow. The solution helps win over customers, exceed expectations, and drive more revenue. High-growth brands including Angi, Career Karma, Fidelity Life, RefiJet, Ro, SoFi, and The Farmer’s Dog use Regal to increase sales and improve customer retention. How are you building on Twilio? E.g., how do you use email, SMS, WhatsApp, Verify, etc. to communicate with your customers? We leverage Twilio’s APIs for programmable voice and conferences to power calls, transfers, and voicemail drops. And we leverage Twilio’s programmable SMS and conversations service to power triggered SMS and agent-customer conversations. We’ve also built our own routing rules, which determine which customers speak with which agents, using Twilio’s TaskRouter as the backbone. What has the benefit or commercial impact been for your business since using Twilio? Twilio’s APIs and components allowed us to get to market faster. It’s so easy to build on top of, and we could launch much faster than if we built everything in house. It also enabled us to learn from Twilio’s experience and design paradigms while allowing us to focus our engineering resources on our key differentiators (rather than problems that had already been solved). By building on Twilio, what has the impact been for your customers? Our iteration times are faster, and customers benefit from that. When a customer requests a feature like barge or pay with IVR, we’re already 50% of the way toward delivering the feature, even if it wasn’t on our roadmap. Are there any future Twilio Products that you plan to integrate into your startup, and if so, why?
We are starting to experiment with Twilio’s Answering Machine Detection service as we build out more sophisticated dialers to automatically detect whether a voicemail or a human answers a call - in order to save agents time and avoid them listening to dial tones and voicemails. Do you have a recording of your DEMO and how you are using Twilio that you would like to share below? What is the best piece of advice you would give to founders who are looking to build on Twilio? Don’t spend too much time thinking about whether to build something yourself or use a Twilio solution; the answer is: if Twilio makes it easier and faster to build, just go with that - even if it’s not exactly how you would have thought about the solution. It will allow you to focus your engineering time on your differentiation and validating the core hypotheses of your business. Otherwise, building voice and SMS products is very time-consuming, technical, and hard to test and debug - don’t underestimate how challenging it is and how much Twilio abstracts away the hard parts. What excites you most about being a founder? We have always believed that when you make customers successful, they will make you successful. We built the company on our value of ‘Customers are Royalty,’ and it’s a major part of our success. Since the company’s founding in 2020, Regal has driven over $2 billion in revenue for its customers. The platform has powered more than 50 million customer conversations, helping more than 150 brands drive an average 25%+ lift in revenue. We’re incredibly proud of the company we have built! We remain excited about fulfilling our mission of treating our customers like Royalty. Thank you for reading! If you have any questions about Twilio Startup Labs or how you can get involved, please contact Frank Y. Owusu on fyawowusu@twilio.com.
Company Name: Posh AI Co-Founder Name(s): Karan Kashyap & Matt McEachern LinkedIn: Posh AI Company Page Industry: Fintech Stage/Funding: Series A $32 Million USD Location: Boston, USA This is a post in a series of interviews with startup founders who have worked with Twilio Startup Labs, a program for developers at early-stage startups to learn how to build, prototype & demo on Twilio. We had the pleasure of working with Karan Kashyap & Matt McEachern on their startup, Posh AI. Describe your company (Startup) journey in 160 characters or less (the original length of a text message) Founded in '15 at MIT. Pivoted to AI in '18. Grew to 100+ staff, 60 clients, $32M raised. Posh is modernizing banking with conversational virtual assistants. Why did you start your company? We originally started Posh as sophomores in 2015. We positioned Posh as a consulting company to work on cool, real-world software & AI challenges while we wrapped up our degrees. Once we graduated, we pivoted towards current-day Posh in 2018, after we noticed the perfect balance of promise and skepticism in conversational automation. Language is hard-wired into all of us - it’s a stretch to even call it a human invention. Popular culture and Sci-Fi hinted that language [and conversation] would become a dominant form of HCI. It’s exciting to play a part in making this a reality. We are also passionate about applying our tech to combat the negative stigma that shrouds the customer service industry, especially in banking. How are you building on Twilio? E.g., how do you use email, SMS, WhatsApp, Verify, etc. to communicate with your customers? We primarily use Programmable Voice. Our conversational virtual assistants are deployed on the telephone. Callers are greeted by an audible welcome message, and they’re free to simply ask for what they need. Programmable Voice is the technical bridge between the user’s phone input and our backend. We also leverage SMS as a means of sending textual follow-up questions whilst users are interacting with the assistant over the phone. For example, an SMS with branch location information is much more helpful than listing that information over the phone. We leverage Media Streams for Voice Authentication. What has the benefit or commercial impact been for your business since using Twilio? We’ve used Twilio since the beginning of our business. It allowed us to rapidly build an MVP product that our market found valuable. And we’ve iterated, polished, and scaled upon it since. By building on Twilio, what has the impact been for your customers? Clunky touch-tone menus at 1-800 numbers are very quickly becoming a thing of the past. Customers’ standards for good product experiences, especially in customer service, have never been higher. We’re in a new era of intelligent virtual assistants. And Twilio’s API suite is the platform upon which delightful, voice-driven, “ask me anything”-style automation can be built. Are there any future Twilio Products that you plan to integrate into your startup, and if so, why? We’re excited to go deeper into Media Streams so that we can have more control over the Automatic Speech Recognition (ASR) and Voice Activity Detection (VAD) systems. What is the best piece of advice you would give to founders who are looking to build on Twilio? Twilio’s APIs are highly approachable. Speed is the name of the game. MVPs aim to capture a large bulk of value in as little time as possible. Twilio’s APIs can help you build that rapid MVP. But once you have Product-Market-Fit (PMF), Twilio’s APIs don’t stop there.
They allow for a massive amount of intricacy and control, so that you can really polish your product UX, post-MVP. Twilio Support & Staff are also very helpful for understanding edge cases. What excites you most about being a founder? I (Matt McEachern) thrive on the uncertainty. So much of my Posh journey has been characterized by putting relentless effort behind uncertain bets. It’s exhilarating [for the whole team] when those bets pay off. Founder Biographies: Karan Kashyap Karan is Posh’s Chief Executive Officer and co-founder. His education at MIT included undergraduate and graduate degrees focused on artificial intelligence, where he was named a Siebel Scholar for his research work. He soon began AI and software consulting with enterprises across multiple industries, building a business foundation and establishing a pathway to the beginning of Posh. Posh has over 57 clients with over 100 products deployed across the banking and credit union space, with an ever-growing client base and employee roster. Outside of work, Karan has joyfully traveled across 20+ countries, plays various sports, including soccer, golf, and squash, and enjoys cooking. He lives in Cambridge, MA. Matt McEachern As Posh’s Chief Technology Officer and co-founder, Matt has always been passionate about problem-solving with conversational AI. His skill set was forged at MIT, where he first met co-founder and CEO Karan Kashyap during their freshman year. As co-founder and CTO, he has revolutionized customer service in the banking sector with AI, working to ensure the functionality and implementation of AI tech for Posh. Additionally, he is passionate about making Posh a great place to work, especially for self-starters. Matt’s hobbies and interests include pickup sports, theoretical physics, and investing. Thank you for reading! If you have any questions about Twilio Startup Labs or how you can get involved, please contact Frank Y. Owusu on fyawowusu@twilio.com.
As a Twilio Solutions Architect, I have worked with many enterprises to help them develop strategies for leveraging Twilio’s industry-leading APIs. Frequently, those discussions have centered around a “Communications Layer,” or a “Communications Hub,” where an enterprise could centralize how their applications call Twilio to execute on Messaging sends. Centralizing messaging traffic, visibility, analytics, compliance, security, and infrastructure makes a ton of sense for enterprises. In addition, an Enterprise Communication Layer can lead to significant cost savings from consolidating volumes, choosing cost-effective channels, prioritizing message types, and reducing internal support costs. No two enterprise systems are the same, but they do share many key concepts. In this article, I’ll introduce a blueprint for an Enterprise Communications Layer that shows how to leverage Twilio’s APIs to maximize your enterprise’s Messaging strategy. Towards an Enterprise SMS Communications Layer It is important to point out that while sending an SMS to a customer is the key objective, this blueprint also addresses security, compliance, error handling, inbound messages (2-Way), and engagement feedback. These additional topics are equally important to the overall success of your enterprise SMS strategy. We can begin with the entire blueprint and then break down notes for each section: This image is a little small, so it may be helpful to open the image in a separate browser tab or download the PDF. We will expand on each numbered note to explain the blueprint and the complete enterprise SMS lifecycle. 1. Enterprise Applications Trigger Message Events This whole process starts with the need to send an SMS message. (For the purpose of this blog post, we will say “SMS,” but everything also applies to MMS as well as to WhatsApp!) Any Enterprise Application that needs to send an SMS message can send events to an internal API in the Enterprise Communication Layer. First and foremost, implement authentication and authorization on this internal API per your process and network requirements. Through this step, you will standardize how your enterprise sends SMS messages. 2. Process Message Event Once a message event passes your internal API, it can be processed before being sent to the Twilio Integration. Steps can include: Pull required evaluation data from your CDP (check out Twilio Segment!) or other data sources. Determine whether the message event can proceed. Set the parameters for API calls to Twilio (Twilio account, sender, priority, and more). Send to an internal queue before calling the Twilio API. The internal queue could address both rate limits on the throughput for your senders, as well as the prioritization of your messages. For example, transactional messages could be sent out before promotional messages already in your internal queue. 3. Trigger API Call to Twilio Messages released from the queue are sent to your Twilio Integration. You can leverage the Twilio Programmable Messaging API (with quickstarts available in many popular programming languages) to build an integration to meet your requirements. There are many examples that show you how to properly format your message events into API calls to Twilio. Twilio has many endpoints (and SDKs) that you can learn about in our world-class docs, but for sending SMS messages, you will hit our Messaging API. API calls to Twilio are REST calls made over HTTPS using an API Key for the Twilio Account that you want to use for that call.
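As a minimal sketch of that integration point, here is what a call can look like with the twilio Node.js helper library, authenticating with an API Key scoped to the target Account. The environment variable names and callback URL are illustrative, not part of the blueprint.

const twilio = require("twilio");

// An API Key SID/Secret pair authenticates the call and targets an Account
const client = twilio(
  process.env.TWILIO_API_KEY_SID,
  process.env.TWILIO_API_KEY_SECRET,
  { accountSid: process.env.TWILIO_ACCOUNT_SID }
);

async function sendSms(messageEvent) {
  // messageEvent comes out of your internal queue (step 2)
  const message = await client.messages.create({
    to: messageEvent.to,
    from: messageEvent.from, // a provisioned Sender (see step 5)
    body: messageEvent.body,
    statusCallback: "https://comms.example.com/twilio/status", // see step 8
  });
  return message.sid; // persist this SID for tracking and reconciliation
}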
You have the option to enable and use Twilio Public Key Client Validation to secure your API calls made TO Twilio. This integration can also take advantage of special Twilio features such as Link Shortening and Tracking, Message Scheduling, Content API, and the upcoming Message Tagging. 4. Twilio Accepts and Routes API Call Twilio’s Messaging API will receive your API calls, and if they are valid, route them to the corresponding Twilio Account or Subaccount. Invalid API calls will return an error code and message. Your Twilio integration needs to handle errors returned on your API calls to Twilio. You can read more about Twilio Accounts in the docs, but I like to explain them as containers for resources that you can provision in Twilio. For example, a resource provisioned in a Twilio Account (or Subaccount) could be a phone number used to send SMS messages. It makes sense to have Twilio Accounts (and Subaccounts) to match your business units and use cases and to follow best practices to separate your concerns. When you make an API call to Twilio, the API Key you use will specify the intended Twilio Account (or Subaccount) you’d like to use, and the Messaging API will route your request to the correct Twilio Account. 5. Twilio Account / Subaccount Processes Message For SMS use cases, you will need to provision the appropriate Senders to meet your branding, regulatory, and throughput requirements. Sender types include Short Codes, Long Codes, Toll Free, and Alphanumeric Sender IDs. Senders can be configured individually, or they can be added to a Messaging Service. Messaging Services are out of scope for this blog post, but they are a powerful tool that adds important functionalities such as advanced opt-out management, automatic sender selection, sticky sender, sender pools, and more. While planning your Twilio configuration, you should certainly be aware of Messaging Services and – most likely – use them! (A short sketch of a Messaging Service send follows at the end of this section.) Twilio will manage a queue of messages based on the available throughput of your senders in each Account or Subaccount. In other words, the queue size for each Twilio account will likely be different depending on the senders available. The queue for each Account will hold up to 4 hours’ worth of messages at the max throughput. Read more: Understanding Twilio (SMS) Rate Limits and Queues. 6. Twilio TrustHub Addresses Compliance We cannot move on to sending SMS messages without talking about compliance! Over the last few years, the SMS ecosystem has shifted dramatically across global markets, especially when it comes to messaging compliance. The carriers, industry partners, and Twilio have pushed to create safe and trusted messaging for consumers while combating bad actors and SPAM. Twilio TrustHub provides a central place for managing the registration and verification details that are needed for sending compliant messages across the globe. All SMS Senders require registration and verification, and Twilio’s resources will help you navigate the processes and keep you in good standing with the carriers. You can find Regulatory Compliance under Phone Numbers in the Twilio Console. 7. Super Network Routes to Carrier Messages released from the Account Queue in step 5 are sent to the Twilio Super Network. The Super Network will find the optimal route to the intended carrier and deliver the message. Twilio’s Super Network is a differentiator and will allow your enterprise to confidently send SMS messages globally at scale.
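As promised, here is that Messaging Service sketch. If you adopt a Messaging Service, the API call from step 3 changes only slightly: you pass a messagingServiceSid instead of an explicit from, and Twilio selects the sender for you. This hedged example reuses the client from the earlier sketch; the environment variable name is illustrative.

async function sendViaMessagingService(client, to, body) {
  // The Messaging Service picks the sender and applies configured
  // features such as sticky sender and advanced opt-out handling
  const message = await client.messages.create({
    to,
    body,
    messagingServiceSid: process.env.MESSAGING_SERVICE_SID,
  });
  return message.sid;
}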
8. Feedback Sent Back to Communications Layer Sending SMS (and WhatsApp and MMS) is certainly the goal of the system, but knowing what happened to those messages is a crucial part of a successful solution. The Twilio Super Network will deliver your messages via optimized routing to the destination carrier, and all data will be logged to your Twilio Accounts. In addition, you can receive feedback from those carriers as the message statuses change. With Twilio, you can configure Status Callbacks to receive those status change updates in real time and process them in your data systems. In addition to Status Callbacks, Twilio offers Event Streams as another way for your organization to receive this data. All of this data generated by these customer engagements is extremely valuable and is definitely worth building into your Enterprise Communications Layer from the start! Feed this data back into your CDP, your CRM, and your other data stores and analytic tools. Check out Twilio’s Customer AI to learn more about how Twilio plans to help customers use data like this to combine living customer profiles and real-time engagement data with artificial intelligence to better serve your customers. Many enterprises do not want To phone numbers or message bodies saved to external log files. Fortunately, Twilio has a solution. Message Redaction, available in Twilio Editions, can configure your Twilio Accounts to redact the To number and message body before messages are persisted to Twilio logs. In other words, with Message Redaction ON, we do not save this personal data in our logs. We keep the unredacted message data only long enough to be sure that the carrier has processed the message. This differentiating solution should effectively address security and privacy concerns for many enterprises. Security is always top of mind at Twilio. You can be sure that each request coming from Twilio is valid by inspecting the X-Twilio-Signature. In addition, via Static Proxy, available in Twilio Editions, you can leverage an allowlist of static IP addresses that Twilio webhook responses will come from. If you want an idea of how you could build a feedback handler, check out this blog: Twilio Serverless Status Callback Handler built in AWS. 9. Inbound Messages For 1-Way use cases, handling inbound messages from your end users may seem less important. However, at a minimum you will need to address industry-standard keywords like STOP, HELP, and START. Twilio has some built-in functionality to help you handle these keywords and remain compliant. You are able to customize how you handle these keywords, but, again, I want to point out that Twilio Messaging Services have built-in functionality available. Many Twilio messaging customers utilize 2-Way messaging use cases. In fact, we see that most consumers expect to be able to reply back to messages sent from enterprises. If you are considering just 1-Way messaging, you need to consider how best to handle replies back from your customers. With Twilio, enterprises can put in place short-term solutions now while knowing that Twilio will be able to support your long-term plans with our powerful messaging solutions. Each Sender can have a handler configured so that Twilio knows how to deal with inbound messages. Alternatively, the inbound handler can be set on the Twilio Messaging Service. The inbound handler can be set to send messages to our powerful GUI-based orchestration tool Twilio Studio, or even to our industry-leading contact center solution Twilio Flex.
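If you point a Status Callback or inbound handler at your own Communications Layer instead, validate that X-Twilio-Signature header on every request. Here is a minimal, hypothetical Express sketch; the route, port, and public URL are illustrative, and the URL passed to the validator must match exactly what Twilio requested.

const express = require("express");
const twilio = require("twilio");

const app = express();
app.use(express.urlencoded({ extended: false })); // Twilio posts form-encoded bodies

app.post("/twilio/inbound", (req, res) => {
  const valid = twilio.validateRequest(
    process.env.TWILIO_AUTH_TOKEN,              // the Account's auth token
    req.headers["x-twilio-signature"],
    "https://comms.example.com/twilio/inbound", // the exact public URL
    req.body
  );
  if (!valid) return res.status(403).send("Invalid signature");

  console.log(`Inbound from ${req.body.From}: ${req.body.Body}`);
  res.type("text/xml").send("<Response/>"); // empty TwiML means no reply
});

app.listen(3000);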
Beyond Studio and Flex, Twilio Conversations is another Twilio platform option you should consider! If you want to handle inbound messages using your Enterprise Communications Layer, you would configure the handler for the Sender (or Messaging Service) to invoke a webhook that posts the inbound message to an HTTPS destination of your choosing, much as in the sketch above. Using webhooks to handle your inbound messages will give you ultimate control over how your applications interact with your customers. 10. Inbound Messages Delivered to Webhook These webhooks created by inbound messages are secured via the same options as the Status Callbacks (X-Twilio-Signature and, optionally, Static Proxy, available in Twilio Editions). The payloads of these inbound webhooks contain the core message details, like the message body and from number, plus additional context available from the calling Twilio account. Your enterprise will need to build out all of the application and business logic in your handler to manage these inbound messages. It will likely be necessary to keep track of state and context if you plan to reply with meaningful messages and/or enable conversations. You will want to save these interactions back into your CDP (check out Twilio Segment!), CRM, or other data sources as you enable your data-driven flywheel! Should you choose to reply back to any inbound message, you would simply start back at Step 1 of this blueprint. If you want an idea of how you could build an interactive application, check out this blog post: Build an Interactive Serverless Voice and Messaging Application using Twilio and AWS. Conclusion This blog post is intended to give enterprises an overview of how Twilio can help implement an Enterprise-Grade Messaging Solution that uses a Communication Layer to consolidate all outbound and inbound messages, as well as a mechanism for collecting engagement feedback. This blog post focused on SMS, but your enterprise can build a Communication Layer to include WhatsApp and MMS. Twilio has first-class support for WhatsApp, including a powerful template management system in the Content API. Each one of the 10 steps in this blueprint is crucial and merits consideration and planning. Your Twilio Enterprise Account Team is ready to help you walk through this blueprint and get you set up for success. Dan Bartlett has been building web applications since the first dotcom wave. The core principles from those days remain the same but these days you can build cooler things faster. He can be reached at dbartlett [at] twilio.com.
Company Name: Vibehut.io Founder Name(s): Chris Dolinski LinkedIn: Chris Dolinski LinkedIn Industry: Software, Video Stage / Funding: 1M USD Location: Toronto HQ, Global Team, Delaware C Corp This is a post in a series of interviews with startup founders who have worked with Twilio Startup Labs, a program for developers at early-stage startups to learn how to build, prototype, and demo on Twilio. We had the pleasure of working with Chris Dolinski on his startup, Vibehut.io. Describe your startup journey in 160 characters or less (the original length of a text message) Vibehut took the building block of a video call and evolved it into a platform for professionals, companies, and communities. Its branded video rooms are used to host calls, share resources, sell products, create content, and monetize. Having a Vibehut room is the virtual equivalent of having an office on main street. People know where to find you, can see that you’re available, and can stop by when they have an appointment or would like to talk. Why did you start your company? I started Vibehut because I had sold my previous company (I co-founded Clutch.ca, the Canadian version of Carvana) and was building projects around the opportunities I came across. I was messaging a number of global thought leaders online and quickly realized we were sharing ideas, but the value of conversation wasn’t at the same level it was in real life. Vibehut was born as a platform that connects great people through video calls. “In real life” is the best. The dinners and events create an environment for talking about ideas, trying new experiences, and collaborating. Technology has drastically changed our world recently, and tooling needs to keep up. Most of us take it for granted, but high-speed internet, programmatic video calls, and an online reputation mean that remote work is the default and talent is burgeoning globally. COVID-19, although terrible, put boosters on the acceptance of the virtual world, and socializing has evolved. Deals now move fast, and it’s hard to be in Zurich, Toronto, and Hong Kong at the same time. Video calls lack many of the benefits of in-person, but they win in efficiency and speed. Vibehut stepped up to provide discoverability and a way to be a part of the web’s active community. How are you building on Twilio? E.g., How do you use email, SMS, WhatsApp, Verify, etc. to communicate with your customers? Vibehut is building on Twilio extensively and picked the APIs needed as a Michelin-star chef picks their ingredients: SMS for notifications; Video for 1:1 and group calls; Composition to download recordings in TikTok, YouTube, and Spotify formats; Storage for downloading and displaying the recorded video. From these primitives, Vibehut has extended to offer our customers: PWA and browser-accessible video calls; branded video call rooms with a unique persistent URL; global discoverability; room and user call counts and analytics; scheduled and monetized events; auto-organizing community calls; user profiles, linked to their other socials; direct messages, group messages with conversational AI support; AI agents and aggregated resources to provide asynchronous support to rooms. What has the benefit or commercial impact been for your business since using Twilio? Twilio has played an enormous role enabling Vibehut’s commercial impact and offering. Vibehut has hosted calls from people in 141 countries in the last 30 days. It has been the platform of choice for the virtual world.
It’s possible and common to speak to someone in Africa, North America, South America, Asia, and Europe on the same day. Notably, Vibehut’s design makes it a preferred choice for instant sales calls and scheduled calls for virtual legal office hours, IELTS English speaking practice, founder community discussions, and podcast recording. Although I’m the founder, I’ve had 2913 calls in the last 2 years. I see it as building up conversational equity. It’s an asset I can point to that says: I’m happy to listen, talk, collaborate, and provide feedback if asked. By building on Twilio, what has the impact been for your customers? Building on Twilio has been a boon for our customers. Twilio offers a robust API providing a reliable service for video conversations, notifications, and content creation. It has supportive documentation and excellent reporting to identify and track outages. These fundamental building blocks provide the confidence we need to pick Twilio as a service provider. See the article that Byvi published about our IELTS video room and the impact it is having for people practising English. Are there any future Twilio Products that you plan to integrate into your startup, and if so, why? We are in the process of implementing Mux, Twilio’s suggested replacement for Twilio Live. We continue to explore the Composition API and have spoken directly with the team about our customer needs there. Our dream is to have a Twilio video call, and on completion it automatically generates viral clips with captions. We already provide YouTube, TikTok, Spotify, and ChatGPT call transcripts for our recorded videos. Have you enjoyed Twilio Startup Labs? And if so, why? Twilio Startup Labs has been great to be a part of. I would like to thank both Brandon Leen from Twilio Ventures and Frank Yaw-Owusu from Twilio Startups for their time. Do you have a recording of your DEMO and how you are using Twilio that you would like to share? What is the best piece of advice you would give to founders who are looking to build on Twilio? The best piece of advice I would give to founders looking to build on Twilio is to get started and involve others. Whether you’re building as part of a team or as an individual, it always helps to have a handle on how things work. When you see a new tool or API offered by Twilio, explore it; make a small app with the API yourself. The building blocks needed to build a billion-dollar company are already available from the thousands of enterprise APIs available. They are waiting for you to piece them together, brand them, and market your products and services. What excites you most about being a founder? What excites me most about being a founder is that I get to visualize what could exist, decide what we want to work on, and then bring it into existence. People have always been tool builders, and it’s the best time in human history to build something of magnitude. Thank you for reading! If you have any questions about Twilio Startup Labs or how you can get involved, please contact Frank Y. Owusu on fyawowusu@twilio.com.
In the world of web development and network security, tunneling services have become an essential tool for testing and deploying applications. As developers, we all appreciate the value of a streamlined and secure development process. But what if you need to test or share your locally hosted applications with the rest of the world? This is where tunneling services come in handy. Tunneling services enable developers to share their work, engage with team members, and even test and deploy webhooks and APIs without the need for public infrastructure, by establishing encrypted connections between local machines and remote servers. So basically, the ability to expose your localhost:port to the world. In this blog post, we will discuss three popular tunneling services: ngrok, Cloudflare Tunnel, and Tailscale. ngrok ngrok is a tunneling service that creates a secure, encrypted tunnel between a local machine and a public internet address. It allows you to expose a local web server to the internet and access it through a public URL. ngrok supports HTTP, HTTPS, and TCP protocols and can be used to test webhooks, APIs, and other applications that require a public URL. One of the advantages of ngrok is its ease of use. You can download the ngrok binary and start tunneling your local applications within minutes. ngrok offers a free plan that provides a limited number of tunnels and connections. Paid plans offer more advanced features, such as custom domains, reserved addresses, and team management. ngrok provides a downloadable binary that can be used to create tunnels. On macOS, this installation is easy: brew install ngrok/ngrok/ngrok Here's an example of how to create an HTTP tunnel to a local web server running on port 3000: ngrok http 3000 This command will start ngrok and create a public URL that can be used to access the local web server. ngrok provides a web interface at http://localhost:4040 that allows you to inspect incoming requests, replay requests, and analyze traffic. Cloudflare Tunnel Cloudflare Tunnel is a tunneling service offered by Cloudflare, a leading content delivery network (CDN) and cybersecurity company. Cloudflare Tunnel allows you to create secure connections between your local dev machine and Cloudflare's global network of servers. With Cloudflare Tunnel, you can expose any TCP service running on the local machine to the internet. This includes web servers, databases, and other network services. Cloudflare Tunnel also supports HTTP and HTTPS protocols and can be used to test web applications and APIs. One of the advantages of Cloudflare Tunnel is its integration with other Cloudflare services. You can use Cloudflare's firewall and other security features to protect your applications from malicious traffic. Cloudflare Tunnel also provides real-time logs and analytics to monitor traffic and performance. It offers a free plan that provides a limited number of tunnels and connections. Paid plans offer more advanced features, such as custom domains, SSL certificates, and advanced analytics. Cloudflare Tunnel requires you to install the cloudflared CLI, and before we can move further, you will also need a Cloudflare account with a zone.
After creating the account and zone, here's how to create a tunnel to a local web server running on port 3000: Install Cloudflare CLI brew install cloudflare/cloudflare/cloudflared Authenticate cloudflared cloudflared tunnel login Running the above command will: Open a browser window and prompt you to log in to your Cloudflare account. After logging in to your account, select your hostname. Generate an account certificate, the cert.pem file, in the default cloudflared directory. Create a tunnel cloudflared tunnel --url http://localhost:3000 This command will start Cloudflare Tunnel and create a public URL that can be used to access the local web server. You can confirm that the tunnel has been successfully created by running: cloudflared tunnel list Cloudflare Tunnel provides real-time logs and analytics that can be viewed in the Cloudflare dashboard. Tailscale Tailscale is a mesh networking service that allows you to create secure, private networks between your devices. Tailscale uses WireGuard, a modern VPN protocol, to create encrypted tunnels between devices. With Tailscale, you can access resources on your local network from anywhere in the world. This includes file shares, printers, and other network services. It also provides a secure remote access solution for teams, allowing members to access resources on the company network from anywhere. One of the advantages of Tailscale is its ease of use. You can install Tailscale on your devices and connect to your private network within minutes. It also provides a web interface that allows you to manage your devices and access controls. Tailscale offers a free plan that provides up to 100 devices and 10GB of data transfer per month (as of 31 Aug, 2023). Paid plans offer more advanced features such as custom domains, user management, and advanced networking. Tailscale provides a downloadable binary that can be used to create secure networks between devices. Here's an example of how to install and configure Tailscale: Download Tailscale from this link Accept any prompts to install VPN configurations that may appear Click Log in from the Tailscale menu bar item and authenticate in your browser This will install Tailscale on macOS and create a secure network between devices. Tailscale provides a web interface that allows you to manage devices and access controls. Also, Tailscale Funnel is off by default and requires a double opt-in. To enable Tailscale Funnel, follow these instructions. In order to expose a local port to the internet, you can use Tailscale Funnel. Open your preferred shell and run the following commands. By default, no alias is set up; if you plan on frequently accessing the Tailscale CLI, you can add an alias to your .bashrc, .zshrc, or shell config to make it easier: alias tailscale="/Applications/Tailscale.app/Contents/MacOS/Tailscale" Expose a local server running on localhost:3000 with Tailscale: tailscale serve https / http://localhost:3000 tailscale funnel 443 on Run the following command to see the Funnel status: tailscale funnel status The output should look something like: # Funnel on: # - https://yourmachinename.tailxxxx.ts.net https://yourmachinename.tailxxxx.ts.net (Funnel on) |-- / proxy http://127.0.0.1:3000 Tailscale also offers a client for your mobile devices, available to download on both iOS and Android devices.
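All three walkthroughs above assume something is listening on localhost:3000. If you need a quick test target, a minimal Node.js server like this hypothetical one will do:

const http = require("http");

// A tiny server to expose through ngrok, Cloudflare Tunnel, or Tailscale
http
  .createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(`Hello from behind the tunnel! You requested ${req.url}\n`);
  })
  .listen(3000, () => console.log("Listening on http://localhost:3000"));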
Comparison Now let’s compare the free plans of all three tunneling services and see what they offer, as of 31 August, 2023: ngrok, Cloudflare Tunnel, and Tailscale are three popular tunneling services that provide different features and capabilities. ngrok is a simple and easy-to-use service that allows users to expose local web servers to the internet. Cloudflare Tunnel provides advanced security features and integration with other Cloudflare services. Tailscale is a mesh networking service that allows users to create secure private networks between their devices. Each of these services has its advantages and can be used to solve different network security problems. My preference is Tailscale because of its extensive features. I personally love the ability to securely access my personal network from anywhere in the world and the possibility of creating my own homelab. Take one (or multiple!) of the tunneling services for a spin! Don’t have something in mind to build? Try one of our quickstarts, for example How to set up your Python and Flask development environment. That tutorial helps you set up ngrok with Python and Flask, but to make it exciting, try doing it with one of the other tunneling services mentioned above. I can’t wait to see what you build! Rishab Kumar is a Developer Evangelist at Twilio and a cloud enthusiast. Get in touch with Rishab on Twitter @rishabk7 and follow his personal blog on cloud, DevOps, and DevRel adventures at blog.rishabkumar.com