The Twilio Voice SDK is a powerful tool that empowers developers to integrate voice calling capabilities seamlessly into their applications. However, even the most finely tuned applications can encounter disruptions. This is where comprehensive browser logs come into play, proving to be a game-changer when troubleshooting issues related to the Voice JS SDK. In this guide, I will lead you through a step-by-step process to configure logging at the debug level and effectively gather browser logs, with a specific emphasis on troubleshooting with the Twilio Voice JS SDK.

Prerequisites

In order to follow along, you'll need:

Google Chrome or Microsoft Edge browser
Mac/Windows OS
Voice JS SDK client with logLevel set to debug, as described below

Check your logLevel

The console log level that the Voice JS SDK client uses is configured through the loglevel-based logger, which allows for runtime logging configuration. Read more about best practices here. To configure the log level, use the logLevel property in the DeviceOptions object when instantiating a Twilio.Device. Refer to the example below:

// when instantiating a Twilio.Device
const device = new Twilio.Device(token, { logLevel: 1 });
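The logLevel option also accepts the loglevel package's named levels in recent SDK versions, so the string form below should be equivalent to the numeric one. A minimal sketch, assuming a valid access token is already in scope:

// 'debug' corresponds to numeric level 1; levels range from
// 0/'trace' (most verbose) through 5/'silent'.
const device = new Twilio.Device(token, { logLevel: 'debug' });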
Utilize browser-based debugging

The Twilio Voice JS SDK is optimized for a seamless experience on supported desktop browsers such as Google Chrome and Microsoft Edge. Whether you're using Chrome or Edge, the steps below are the same, ensuring that you can leverage the power of browser logs regardless of your preferred browser.

Set up and start collecting browser logs

Step 1: Open the Chrome menu and navigate to Developer Tools. You can open Developer Tools with keyboard shortcuts or through the Chrome menu. The keyboard shortcuts are as follows: Mac OS: CMD+Shift+J or CMD+Shift+C. Linux, Chromebook, and Windows: Ctrl+Shift+J. From the Chrome menu: open the Chrome menu and go to More Tools > Developer Tools.

Step 2: Click on the settings icon in the upper right corner of the Developer Tools window.

Step 3: Select the following checkboxes. Under Console: Log XMLHttpRequests, Show timestamps, and Preserve log upon navigation. Under Network: Preserve log and Record network log.

Step 4: Close the window.

Step 5: To select a filter level on the Console tab, locate the Log level list at the top right corner of the window. Choose any filter level that is not currently selected. If you select all levels, the log level selection will display as All levels.

Step 6: To collect network logs during issue reproduction, select the menu and anchor the window at the bottom by selecting the icon with the smaller box on the bottom next to the Dock side option. It's important to note that the network logs for the session will be lost if you close this window or the tab.

Step 7: Keep the console open and replicate the steps necessary to reproduce the issue being investigated.

Step 8: After reproducing the issue, navigate to the Console tab and gather the log files. Simply right-click on the log lines and select "Save As" to save them.

Step 9: Similarly, go to the Network tab and click on the export download icon to save the network logs.

Step 10: You might be able to identify and fix the problem yourself from the logs. Alternatively, you can upload the log file to a support ticket along with the approximate timestamp when the issue occurred and the call SID as a reference to troubleshoot the issue. Check to ensure that the logs encompass the timeframe of the reported issue.

Step 11 (Optional): If you have not yet opened a support ticket for the issue, please follow the instructions provided in this link to create and submit a support ticket for assistance.

What's next for troubleshooting Voice JS SDK issues?

Understanding how to collect and analyze browser logs is a valuable skill for troubleshooting Voice JS SDK issues and will save time if you file a support ticket. By following the steps outlined in this article, you can effectively capture and examine browser logs, gaining deeper insights into potential errors, performance bottlenecks, and other issues affecting the user experience. Furthermore, you can enhance your troubleshooting capabilities by referencing the Voice JavaScript SDK best practices. Following these best practices will ensure your users have a seamless calling experience, and will also make it easier to troubleshoot connection and call quality issues.

Khushbu Shaikh is a dedicated Lead Technical Account Manager who is an invaluable asset to the personalized support team. With a wealth of experience, Khushbu excels in working with numerous accounts, diligently assisting them in overcoming challenges and providing effective solutions. Her expertise lies in troubleshooting customer issues, offering insightful workarounds, and delivering exceptional support. For any inquiries or assistance, Khushbu can be reached at kshaikh [at] twilio.com.
One-time passwords (OTPs) provide an additional layer of security by generating temporary, unique codes that users must enter to complete specific actions or access particular features. While text messages are a common method of OTP delivery, voice-call OTP verification is also gaining popularity due to its accessibility and reliability. When used, the user receives a voice call that verbally provides their verification code, which they then enter into the application.

In this tutorial, you will learn how to integrate voice call OTP verification into a Symfony application to verify user logins using Twilio Programmable Voice. This will be achieved by creating registration and login pages, along with a verification page for users to enter the received OTP, enabling them to gain access to the application.

Prerequisites

To follow along with this tutorial, you will need the following:

PHP 8.1 or higher (ideally PHP 8.3)
Composer installed globally
The Symfony CLI
A Twilio account with an active phone number. Click here to create a free account. Be aware that, if you are using Twilio's free trial, you will need to upgrade your account to be able to make calls to any unverified non-Twilio phone number.
Access to a MySQL database
Basic knowledge of Symfony and Doctrine would be helpful, though not essential

Create a new Symfony project

Let's get started by creating a new Symfony web project using the Symfony CLI. You can do this by running the following command:

symfony new my-project --version="6.3.*" --webapp

Then, after the installation is complete, run the commands below to start the application server:

cd my-project/
symfony server:start

Once the application server is started, open http://localhost:8000/ in your preferred browser and you'll see the application's home page.

Set up the database

To connect the application to the MySQL database, you must first install Doctrine. You can install it by running the following commands in your terminal:

composer require symfony/orm-pack
composer require --dev symfony/maker-bundle

Now, to configure the database connection, open the .env file in the project's root directory. Then, comment out the existing DATABASE_URL setting. After that, add the following configuration to the end of that section (doctrine/doctrine-bundle).

DATABASE_URL="mysql://<db_user>:<db_password>@127.0.0.1:3306/<db_name>"

In the configuration above, replace <db_user> with the database's username, <db_password> with the database's password, and <db_name> with the name of the database. Then, to create the configured database, if it's not already provisioned, run the command below in your terminal.

php bin/console doctrine:database:create

Create the required entities

An entity is a PHP class that represents a database table schema, mapping its properties to table columns for effective interaction between the application and the database. Now, let's create an entity class and add fields to the database table using the make:entity command below. Running it will prompt you with questions about the new entity class.

php bin/console make:entity

The above command will prompt you to enter the entity name and then set the database table schema for the entity. Enter UserInfo as the entity name, then add the fullname, phone, username, and password fields, each as a string type.
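For reference, the generated src/Entity/Userinfo.php ends up looking roughly like the sketch below. The field names are inferred from the form types used later in this tutorial, and the generated getters and setters are omitted for brevity, so treat this as an illustration rather than the exact generated file:

<?php

namespace App\Entity;

use App\Repository\UserinfoRepository;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: UserinfoRepository::class)]
class Userinfo
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\Column(length: 255)]
    private ?string $fullname = null;

    #[ORM\Column(length: 255)]
    private ?string $phone = null;

    #[ORM\Column(length: 255)]
    private ?string $username = null;

    #[ORM\Column(length: 255)]
    private ?string $password = null;

    // make:entity also generates a getter and setter for each field
}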
Next, run the following commands to generate a migration and apply the created entity fields to the database.

php bin/console make:migration
php bin/console doctrine:migrations:migrate

Create a Form Type

A form type is a class that defines the structure and behavior of a form in a Symfony application. It specifies the fields, their types, validation rules, and other attributes of the form. Before you create the routes for the registration, login, and verification pages, let's first create the form types for both registration and login. The form type is used to define the form fields. To create a registration form type, run the command below in your terminal.

php bin/console make:form RegistrationType

Running the above command will prompt you to enter the entity's name. Enter the entity name UserInfo that you created earlier. Now, in your code editor, if you navigate to the src/Form folder and open RegistrationType.php, you will find that it looks like the following.

<?php

namespace App\Form;

use App\Entity\Userinfo;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilderInterface;
use Symfony\Component\OptionsResolver\OptionsResolver;

class RegistrationType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options): void
    {
        $builder
            ->add('fullname')
            ->add('phone')
            ->add('username')
            ->add('password')
        ;
    }

    public function configureOptions(OptionsResolver $resolver): void
    {
        $resolver->setDefaults([
            'data_class' => Userinfo::class,
        ]);
    }
}

Now, let's ensure that the user's password is not exposed. To do so, in the code above, update the buildForm() method to match the following:

public function buildForm(FormBuilderInterface $builder, array $options): void
{
    $builder
        ->add('fullname')
        ->add('phone')
        ->add('username')
        ->add('password', PasswordType::class);
}

Then, add the following use statement to the top of the file.

use Symfony\Component\Form\Extension\Core\Type\PasswordType;

Next, let's create a login form type, using the same process as above, by running the command below.

php bin/console make:form LoginFormType

When prompted to enter your entity name, input UserInfo. In the src/Form folder, open the generated LoginFormType.php file and replace its contents with the following code to remove all unnecessary fields.

<?php

namespace App\Form;

use App\Entity\Userinfo;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilderInterface;
use Symfony\Component\OptionsResolver\OptionsResolver;
use Symfony\Component\Form\Extension\Core\Type\PasswordType;

class LoginFormType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options): void
    {
        $builder
            ->add('username')
            ->add('password', PasswordType::class)
        ;
    }

    public function configureOptions(OptionsResolver $resolver): void
    {
        $resolver->setDefaults([
            'data_class' => Userinfo::class,
        ]);
    }
}

Create the authentication controller

Let's create the controller for the application and install the HttpFoundation package to work with sessions, by running the following commands in your terminal.

php bin/console make:controller Authentication
composer require symfony/http-foundation

Now, navigate to the src/Controller directory and open AuthenticationController.php. Then, define the application route and its logic by replacing the existing code with the following.
<?php

namespace App\Controller;

use App\Entity\Userinfo;
use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\Routing\Annotation\Route;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use App\Form\RegistrationType;
use App\Form\LoginFormType;
use Symfony\Component\HttpFoundation\Session\SessionInterface;

class AuthenticationController extends AbstractController
{
    #[Route('/authentication/register')]
    public function register(Request $request, EntityManagerInterface $entityManager): Response
    {
        $Userinfo = new Userinfo();
        $mxg = "";
        $form = $this->createForm(RegistrationType::class, $Userinfo);
        $form->handleRequest($request);
        if ($form->isSubmitted() && $form->isValid()) {
            $entityManager->persist($Userinfo);
            $entityManager->flush();
            return $this->redirect('login?mxg=Registration successful. Please login.');
        }
        return $this->render('authentication/register.html.twig', ['message' => $mxg, 'form' => $form->createView()]);
    }
}

In the code above:

All the necessary dependencies are imported
The route for the registration page is defined, where form submission data is validated and then stored in the database
The registration form template is loaded from authentication/register.html.twig

To create the registration template, navigate to the templates/authentication directory and create a new file named register.html.twig. Inside the file, add the following code:

{% extends 'base.html.twig' %}
{% block title %}Register{% endblock %}
{% block body %}
<h1>Registration</h1>
{{ message }}
{{ form_start(form) }}
{{ form_row(form.fullname) }}
{{ form_row(form.username) }}
{{ form_row(form.phone) }}
{{ form_row(form.password) }}<br/>
<button type="submit">Register</button><br/>
<span>Already have an account? <a href="login">Login here</a></span>
{{ form_end(form) }}
{% endblock %}

Finally, in templates/base.html.twig, add the following block to the <head> section, so that the templates are styled properly.

<style>
  body { font-family: Arial, sans-serif; text-align: center; }
  h1 { color: #333; }
  form { width: 300px; margin: 0 auto; padding: 20px; background: #f5f5f5; border: 1px solid #ccc; border-radius: 5px; }
  form div { margin-bottom: 17px; display: inline-block; width: 100%; }
  label { display: block; text-align: left; margin-top: 10px; font-weight: bold; }
  input[type="text"], input[type="password"] { width: 93%; padding: 10px; margin-top: 5px; border: 1px solid #ccc; border-radius: 5px; font-size: large; }
  button[type="submit"] { display: block; width: 100%; padding: 10px; background: #007bff; color: #fff; border: none; border-radius: 5px; cursor: pointer; font-size: large; }
  button[type="submit"]:hover { background: #0056b3; }
</style>

Now, save your work and open http://localhost:8000/authentication/register in your browser. Then, register a new account using your phone number (including your country code).

Install the Twilio PHP SDK

To interact with Twilio Programmable Voice, you need to install the Twilio SDK for PHP. Run the command below to do so.

composer require twilio/sdk

You can get your Account SID and Auth Token from your Twilio Console dashboard.

Storing the access tokens in the .env file

To ensure that the Twilio API credentials are well secured, let's store them in the .env file. To do so, open the .env file in the project's root folder and add the following code.
TWILIO_ACCOUNT_SID=<twilio_account_sid>
TWILIO_AUTH_TOKEN=<twilio_auth_token>
TWILIO_PHONE_NUMBER=<twilio_phone_number>

In the code above, replace <twilio_account_sid>, <twilio_auth_token>, and <twilio_phone_number> with your corresponding Twilio values.

Add functionality for sending a verification code

Let's create a function that will connect to Twilio Programmable Voice and send the verification code through a phone call to the user. Inside the src folder, create a Service folder. Inside this folder, create a file named TwilioService.php and add the following code to it.

<?php

namespace App\Service;

use Twilio\Rest\Client;

class TwilioService
{
    public function sendVoiceOTP($recipientPhoneNumber, $otpCode)
    {
        $accountSid = $_ENV['TWILIO_ACCOUNT_SID'];
        $authToken = $_ENV['TWILIO_AUTH_TOKEN'];
        $twilioPhoneNumber = $_ENV['TWILIO_PHONE_NUMBER'];

        $client = new Client($accountSid, $authToken);
        $call = $client->calls->create(
            $recipientPhoneNumber,
            $twilioPhoneNumber,
            [
                'twiml' => '<Response><Say>Your OTP code is ' . $otpCode . '. Once again, your OTP code is ' . $otpCode . '.</Say></Response>'
            ]
        );

        return $call->sid;
    }
}

In the code above, the sendVoiceOTP() function takes two arguments: $recipientPhoneNumber and $otpCode. The OTP message is passed as inline TwiML, wrapped in the Say element, which says the code twice before the call ends.

Create the login controller

Next, let's add a login route to the controller. This route will allow users to log in with their username and password. After logging in, an OTP will be sent to the user's registered phone number as an additional authentication method. To do that, open AuthenticationController.php and add the following function to it:

#[Route('/authentication/login')]
public function login(Request $request, EntityManagerInterface $entityManager, SessionInterface $session, TwilioService $twilioService): Response
{
    $Userinfo = new Userinfo();
    $OTP = null;
    $mxg = $request->query->get('mxg');
    $form = $this->createForm(LoginFormType::class, $Userinfo);
    $form->handleRequest($request);
    if ($form->isSubmitted() && $form->isValid()) {
        $username = $form->get('username')->getData();
        $password = $form->get('password')->getData();
        $repository = $entityManager->getRepository(Userinfo::class);
        $login = $repository->findOneBy([
            'username' => $username,
            'password' => $password,
        ]);
        if ($login !== null) {
            $phone = $login->getPhone();
            $OTP = random_int(10001, 90009);
            $session->set('otp', $OTP);
            $session->set('phone', $phone);
            $twilioService->sendVoiceOTP($phone, $OTP);
            return new RedirectResponse($this->generateUrl('verify'));
        } else {
            $mxg = "Invalid login Username/Password";
        }
    }
    return $this->render('authentication/login.html.twig', [
        'form' => $form->createView(),
        'otp' => $OTP,
        'message' => $mxg,
    ]);
}

Then, add the following use statement to the top of the file.

use App\Service\TwilioService;

To create the login template, navigate to the templates/authentication folder and create a new file named login.html.twig. Inside the file, add the following code.

{% extends 'base.html.twig' %}
{% block title %}Login{% endblock %}
{% block body %}
<h1>Login</h1>
<span>{{ message }}</span>
{{ form_start(form) }}
{{ form_row(form.username) }}
{{ form_row(form.password) }}
<br/>
<button type="submit">Login</button><br/>
<span>New user? Create a new account <a href="register">here</a></span>
{{ form_end(form) }}
{% endblock %}
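Before testing it out, here is one optional tweak to sendVoiceOTP(), purely illustrative and not part of the tutorial's code: some voices read "42807" as one long number, so spacing the digits before interpolating them into the Say element makes the code easier to catch on the call.

// Hypothetical variant inside sendVoiceOTP(): "42807" becomes "4. 2. 8. 0. 7"
$spokenCode = implode('. ', str_split((string) $otpCode));
$twiml = '<Response><Say>Your OTP code is ' . $spokenCode .
    '. Once again, your OTP code is ' . $spokenCode . '.</Say></Response>';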
Now, open http://localhost:8000/authentication/login in your browser and log in with your username and password. You will receive a call from your Twilio phone number which will provide your verification code. Afterward, you will be redirected to the verification page where you can enter the OTP code.

Add the verification controller

Now, to add the verify route and check whether the entered verification code is correct, add the following code to AuthenticationController.php.

#[Route('/authentication/verify', name: 'verify')]
public function verify(Request $request, SessionInterface $session): Response
{
    if (null !== $session->get('phone')) {
        $otpFromForm = $request->request->get('otp');
        $sessionOtp = $session->get('otp');
        $sessionPhone = $session->get('phone');
        $mxg = "";
        if ($otpFromForm == $sessionOtp) {
            $mxg = 'Code verified successfully.';
        } else {
            if ($otpFromForm == "") {
                $mxg = '';
            } else {
                $mxg = 'Verification code is incorrect.';
            }
            // return new RedirectResponse($this->generateUrl('dashboard'));
        }
        return $this->render('authentication/verify.html.twig', ['message' => $mxg, 'phone' => $sessionPhone]);
    } else {
        return $this->redirect('login');
    }
}

Once the OTP is correct, you can redirect the user to your preferred route. Next, you need to create the verify template. Inside the authentication folder, create a file named verify.html.twig and add the following code to it.

{% extends 'base.html.twig' %}
{% block title %}Verify{% endblock %}
{% block body %}
<h1>Login verification</h1>
<form method="post">
    <span>You'll receive a call on {{ phone }} with your verification code.</span>
    <br><br>
    Please enter the code below:
    <br><br>
    <span>{{ message }}</span>
    <div>
        <input type="text" name="otp" required><br>
    </div>
    <button type="submit">Verify</button>
</form>
{% endblock %}

Now, save your work and log in to your account. You will receive a call from your Twilio phone number, providing your verification code to complete the login. Enter the OTP on the verification page and click the Verify button. If the OTP is correct, you will see a success message. After verifying the OTP, you can redirect the user to their desired page, such as the dashboard, profile, or settings page.

Conclusion

In this tutorial, you learned how to integrate the Twilio Programmable Voice API into a Symfony-based application to serve one-time passwords as an additional authentication factor for user logins. By leveraging this method of OTP delivery, users can receive one-time passwords via telephone calls and gain access to their application with greater security.

Popoola Temitope is a mobile developer and a technical writer who loves writing about frontend technologies. He can be reached on LinkedIn.
Excel, the most popular spreadsheet software around, features a grid-based interface that allows users to organize, compute, and analyze data using rows and columns. It is extensively used for tasks such as creating budgets, managing financial data, tracking inventories, and conducting sophisticated computations. Excel also has extensive functionality for quickly manipulating and analyzing data. This includes formulae and functions for conducting computations, sorting and filtering of data, building charts and graphs, conditional formatting, pivot tables for summarizing and analyzing data, data validation to ensure data integrity, and much more.

Data handling in web applications sometimes requires manipulating Excel files. There are powerful libraries available for Laravel to perform these operations effectively, whether importing data from Excel files into a database or exporting data from a database to Excel files. Assume for a moment that you operate a newsletter and require a list of your subscribers in a spreadsheet; Excel's functionality would be very useful.

So, this tutorial series will give you a step-by-step understanding of Excel file handling in Laravel. It will cover everything from reading and writing Excel files to importing and exporting data, validating and sanitizing data, handling file uploads securely, and error handling and reporting, along with some advanced Excel features and best practices for efficient development.

Prerequisites

PHP 8.2 or newer
Access to a MySQL database
Composer globally installed
Your preferred text editor or IDE
Prior knowledge of Laravel and PHP

Set up the project

To begin, you need to create a new Laravel project via Composer, navigate into the project folder, and start the application by running the commands below.

composer create-project laravel/laravel excel-app
cd excel-app
php artisan serve

Next, open the .env file (in your preferred IDE or text editor) to configure your database, replacing the applicable default configuration settings with the values for your database.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=<<YOUR_DATABASE_NAME>>
DB_USERNAME=<<YOUR_DATABASE_USERNAME>>
DB_PASSWORD=<<YOUR_DATABASE_PASSWORD>>

After that, the next step is to install an Excel file-handling library. The Laravel community provides various libraries for interacting with Excel files; one that stands out, and which we will use in this tutorial, is Maatwebsite\Excel. Ensure you install the required PHP extensions for the library to work properly. To install the library, execute the following command in a new terminal or terminal session.

composer require maatwebsite/excel

Next, you will need to register the package's Facade and ServiceProvider. To do that, add the following to the "Package Service Providers" section in config/app.php:

Maatwebsite\Excel\ExcelServiceProvider::class,

Then, add the following to the aliases array in config/app.php.

'Excel' => Maatwebsite\Excel\Facades\Excel::class,

Read an Excel file

Now that the Laravel project has been created and the Excel library has been installed, the next step is to learn how to read Excel files. You'll need an Excel file, naturally. Fortunately, Microsoft provides an example file you can use in the project, avoiding the need to create something meaningful yourself. To make use of it, open the routes/web.php file and add the following code to it.
use Illuminate\Support\Facades\Storage;

Route::get('/download-file', function () {
    $path = "https://download.microsoft.com/download/1/4/E/14EDED28-6C58-4055-A65C-23B4DA81C4DE/Financial%20Sample.xlsx";
    Storage::disk('local')->put('/data.xlsx', file_get_contents($path));
    return response('done!');
});

When http://127.0.0.1:8000/download-file is accessed, the file from the file path is downloaded and stored in the project's storage/app directory.

An Import class is required to import this file. To create this class, the package offers an artisan command. Run the command below to create it.

php artisan make:import DataImport

The command creates a new file named DataImport.php in the app/Imports directory. This is where all the import classes are stored. When you open the file in your favorite IDE or text editor, the file content should match the following.

<?php

namespace App\Imports;

use Illuminate\Support\Collection;
use Maatwebsite\Excel\Concerns\ToCollection;

class DataImport implements ToCollection
{
    /**
     * @param Collection $collection
     */
    public function collection(Collection $collection)
    {
        //
    }
}

When the library attempts to read an Excel file, it examines the interface implemented by the Import object to determine the data that will be returned. The class implements the ToCollection interface, which lets the library detect that the data type to be returned is a collection. Another interface available to the import class is the ToModel interface. As you can imagine, it alerts the library that it needs to convert the Excel file data to a database-ready model.

Create a controller using the following command to handle all the Excel logic.

php artisan make:controller ExcelController

Replace the code in the newly created file (app/Http/Controllers/ExcelController.php) with the following code.

<?php

namespace App\Http\Controllers;

use Maatwebsite\Excel\Facades\Excel;

class ExcelController extends Controller
{
    public function import()
    {
        return Excel::toCollection(new \App\Imports\DataImport(), 'data.xlsx');
    }
}

Then, in the routes/web.php file, paste the code below.

Route::get('/import', [\App\Http\Controllers\ExcelController::class, 'import']);

When you request the newly created route, http://127.0.0.1:8000/import, you'll notice that it returns the data in data.xlsx as a collection, even though you didn't return anything in the DataImport class. This is because the Excel library performs the work in the background. When a file is imported via the entry point, the Excel Facade attempts to determine the data type through the file extension, as the library handles various file types. Then, it delegates the handling to the Maatwebsite\Excel\Reader class, which reads the data and returns it to the Facade, which in turn returns the data in the specified format. It's worth knowing that whatever you return in the collection() method will not be returned to the controller.
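As a quick illustration (not part of the tutorial's code), Excel::toCollection() returns a collection of sheets, and each sheet is itself a collection of rows, so you can drill into the data like this:

// Hypothetical inspection inside ExcelController::import()
$sheets = Excel::toCollection(new \App\Imports\DataImport(), 'data.xlsx');
$firstSheet = $sheets->first();   // the rows of the first worksheet
$firstRow = $firstSheet->first(); // here: the header row of the sample file
dd($firstRow);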
Import data from an Excel file into the database

What would you do if you needed to import financial data from an Excel spreadsheet into your Laravel application? To begin with, you'd need to create a model and a migration file. To achieve this, run the following command.

php artisan make:model FinancialData -m

In the newly created migration file, stored in database/migrations/*create_financial_data_table.php, update the up() method to match the code below.

public function up(): void
{
    Schema::create('financial_data', function (Blueprint $table) {
        $table->id();
        $table->string('segment');
        $table->string('country');
        $table->string('product');
        $table->string('Discount Band');
        $table->string('Units Sold');
        $table->string('Manufacturing Price');
        $table->string('Sale Price');
        $table->string('Gross Sales');
        $table->string('Discounts');
        $table->string('Sales');
        $table->string('COGS');
        $table->string('Profit');
        $table->string('Date');
        $table->string('Month Number');
        $table->string('Month Name');
        $table->string('Year');
        $table->timestamps();
    });
}

The migration file now contains columns matching the headers of the Excel file. In app/Models/FinancialData.php, add the following line of code to guard the table's id:

protected $guarded = ['id'];

This will prevent the id column from being mass-assigned, which can help to protect your data from being accidentally overwritten or deleted.

Next, create an import class to handle the import. First, run the command below to generate it.

php artisan make:import FinancialDataImport --model=FinancialData

Then, paste the code below into the newly generated file.

<?php

namespace App\Imports;

use App\Models\FinancialData;
use Maatwebsite\Excel\Concerns\ToModel;

class FinancialDataImport implements ToModel
{
    /**
     * @param array $row
     *
     * @return \Illuminate\Database\Eloquent\Model|null
     */
    public function model(array $row)
    {
        return new FinancialData([
            'segment' => $row[0],
            'country' => $row[1],
            'product' => $row[2],
            'Discount Band' => $row[3],
            'Units Sold' => $row[4],
            'Manufacturing Price' => $row[5],
            'Sale Price' => $row[6],
            'Gross Sales' => $row[7],
            'Discounts' => $row[8],
            'Sales' => $row[9],
            'COGS' => $row[10],
            'Profit' => $row[11],
            'Date' => $row[12],
            'Month Number' => $row[13],
            'Month Name' => $row[14],
            'Year' => $row[15],
        ]);
    }
}

Next, run the migration using the following command.

php artisan migrate

Finally, update the import() method in the ExcelController to match the code below.

public function import()
{
    Excel::import(new \App\Imports\FinancialDataImport(), 'data.xlsx');
}

With all of the changes made, visit the http://127.0.0.1:8000/import route to test that the code works. Although it appears that no action has occurred, if you check your database you will see that the data has been added. Examining the table closely, you will see that the first row contains the header row of the Excel file, which isn't meant to be there. To fix this, the library offers the WithHeadingRow interface. The interface selects the first row of the file as the heading row by default. If this is not the case for your file, you may change it by implementing the headingRow() method, which should return the heading row's position as an integer.
First, update the code in the app/Imports/FinancialDataImport.php file to match the code below. Note that once WithHeadingRow is in place, each row is keyed by a slugged version of its column heading, so, for example, "Discount Band" becomes discount_band.

<?php

namespace App\Imports;

use App\Models\FinancialData;
use Maatwebsite\Excel\Concerns\ToModel;
use Maatwebsite\Excel\Concerns\WithHeadingRow;

class FinancialDataImport implements ToModel, WithHeadingRow
{
    /**
     * @param array $row
     *
     * @return \Illuminate\Database\Eloquent\Model|null
     */
    public function model(array $row)
    {
        return new FinancialData([
            'segment' => $row['segment'],
            'country' => $row['country'],
            'product' => $row['product'],
            'Discount Band' => $row['discount_band'],
            'Units Sold' => $row['units_sold'],
            'Manufacturing Price' => $row['manufacturing_price'],
            'Sale Price' => $row['sale_price'],
            'Gross Sales' => $row['gross_sales'],
            'Discounts' => $row['discounts'],
            'Sales' => $row['sales'],
            'COGS' => $row['cogs'],
            'Profit' => $row['profit'],
            'Date' => $row['date'],
            'Month Number' => $row['month_number'],
            'Month Name' => $row['month_name'],
            'Year' => $row['year'],
        ]);
    }

    public function headingRow(): int
    {
        return 1;
    }
}

Run a fresh migration with the following command.

php artisan migrate:fresh

Then, visit the /import route again. This time the header row isn't passed into the database alongside the other data.

Export data from the database to an Excel file

Just like you imported data from an Excel file into the database, you can also export data from the database into an Excel file. To demonstrate this, let's export all of the users in the users table. At this point, there are no users in the table. To resolve this, you need to run the database seeder. In the run() function in the database/seeders/DatabaseSeeder.php file, add the following line of code.

\App\Models\User::factory(20)->create();

Then, seed the database by running the command below.

php artisan db:seed

This adds 20 records to the database's users table. The library also provides an artisan command to create a class to handle the data export.

php artisan make:export UsersExport --model=User

This creates a UsersExport.php file in the app/Exports directory. In that file, a function is defined that returns all user records in the database, which are then sent to Maatwebsite\Excel\Writer, the class that handles the data export. What's left to do is to create the export route and a corresponding controller. In routes/web.php, add the code below.

Route::get('/export', [\App\Http\Controllers\ExcelController::class, 'export']);

Next, in the ExcelController class, add the function below to the file.

public function export()
{
    return Excel::download(new \App\Exports\UsersExport, 'users.xlsx');
}

Finally, in the browser, visit http://127.0.0.1:8000/export. In the background, the library executes the query and retrieves all of the data from the given model, parses the data into an Excel format, and invokes the download function, which depends on Laravel's Response class, to download the file in your browser.

That's the essentials of reading and writing Excel files in Laravel

You've learned how to handle Excel files in Laravel throughout this tutorial, from importing data into the database to exporting it to an Excel file. The second part of this series will cover upload security considerations, error handling, dealing with charts, and many more topics.

Prosper is a freelance Laravel web developer and technical writer who enjoys working on innovative projects that use open-source software.
When he's not coding, he searches for ideal startup opportunities to pursue. You can find him on Twitter and LinkedIn.
Automated notifications are very important in your CI/CD pipelines to ensure that you remain well-informed, react promptly to build statuses, and swiftly address any issues that arise. In this tutorial, I will show you how to implement a system for delivering updates to a Slack channel and through Twilio Programmable SMS, using the Slack and Twilio orbs available in the CircleCI orb registry. While implementing this feature from the ground up could be a complex task, the use of orbs establishes this notification system with minimal code, streamlining the process to just a few lines.

Prerequisites

To complete this tutorial, you will need the following:

A Twilio account - create a Twilio account for free here
Node.js installed on your computer
Nest CLI installed on your computer
GitHub account - create a GitHub account for free here
CircleCI account - create a CircleCI account for free here
Slack workspace - create a Slack workspace for free here

Set Up the Nest.js Application

First, create a Nest.js application that outputs a brief message. After that, you'll use a test to make sure the program sends back the appropriate message. The build on CircleCI will begin when you publish your code to GitHub. As soon as the build is complete, a customized message will be delivered via SMS to the chosen phone number and to the designated Slack workspace. To begin, run this command:

nest new nest-slack-notifications

You will get this output:

⚡ We will scaffold your app in a few seconds.
? Which package manager would you ❤️ to use? (Use arrow keys)
> npm
  yarn
  pnpm

Choose the npm option. After that, you will get the following output:

CREATE nest-slack-notifications/.prettierrc (51 bytes)
CREATE nest-slack-notifications/nest-cli.json (171 bytes)
CREATE nest-slack-notifications/package.json (1965 bytes)
CREATE nest-slack-notifications/README.md (3340 bytes)
CREATE nest-slack-notifications/tsconfig.build.json (97 bytes)
CREATE nest-slack-notifications/tsconfig.json (546 bytes)
CREATE nest-slack-notifications/src/app.controller.ts (274 bytes)
CREATE nest-slack-notifications/src/app.module.ts (249 bytes)
CREATE nest-slack-notifications/src/app.service.ts (142 bytes)
CREATE nest-slack-notifications/src/main.ts (208 bytes)
CREATE nest-slack-notifications/src/app.controller.spec.ts (617 bytes)
CREATE nest-slack-notifications/test/jest-e2e.json (183 bytes)
CREATE nest-slack-notifications/test/app.e2e-spec.ts (630 bytes)

▹▹▹▹▸ Installation in progress... ☕

When the installation is successful, you will see the following results:

🚀 Successfully created project nest-slack-notifications
👉 Get started with the following commands:

$ cd nest-slack-notifications
$ npm run start

Thanks for installing Nest 🙏
Please consider donating to our open collective to help us maintain this package.

🍷 Donate: https://opencollective.com/nest

The application will be created in a new directory named nest-slack-notifications. Change to the directory and start the application to check that everything's working by running the commands below:

cd nest-slack-notifications
npm run start:dev

The application will be available on http://localhost:3000. It outputs a Hello World! message, which indicates that your application is running successfully. To run the test locally in your terminal, you can use the Nest.js application's integrated testing framework, Jest, which provides assert functions by default. The scaffolded test script at src/app.controller.spec.ts verifies that the application responds with Hello World!
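For reference, that generated spec looks roughly like this; it is a sketch of the standard Nest CLI scaffold, so the details may vary between Nest versions:

import { Test, TestingModule } from '@nestjs/testing';
import { AppController } from './app.controller';
import { AppService } from './app.service';

describe('AppController', () => {
  let appController: AppController;

  beforeEach(async () => {
    // Build a lightweight testing module wired with the real controller and service
    const app: TestingModule = await Test.createTestingModule({
      controllers: [AppController],
      providers: [AppService],
    }).compile();

    appController = app.get<AppController>(AppController);
  });

  describe('root', () => {
    it('should return "Hello World!"', () => {
      expect(appController.getHello()).toBe('Hello World!');
    });
  });
});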
Now, exit the existing process by entering CTRL + C and use the following command to run the test locally to ensure that the application is working as expected:

npm run test

If your application is running successfully, you will get a result similar to this:

> nest-slack-notifications@0.0.1 test
> jest

PASS src/app.controller.spec.ts (6.725 s)
  AppController
    root
      √ should return "Hello World!" (35 ms)

Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 7.045 s
Ran all test suites.

Add the CircleCI Configuration

To set up CircleCI for continuous integration, you need to add a configuration file. To do this, create a folder called .circleci at the application root and create a new file called config.yml inside the folder. Copy and paste the following code into the newly created file .circleci/config.yml:

version: 2.1
orbs:
  node: circleci/node@5.0.3
jobs:
  build-test-and-notify:
    executor:
      name: node/default
    steps:
      - checkout
      - run: sudo npm install -g npm@latest
      - run:
          name: Install dependencies
          command: npm install
      - run: npm test
workflows:
  build-and-notify:
    jobs:
      - build-test-and-notify

In the config file above, you have installed all of the project's dependencies and specified the Node.js orb version to be used from the CircleCI orb registry, which is circleci/node@5.0.3. You have also configured the command to run the application's test with run: npm test.

Setting up the Project on CircleCI

Now that you've completed your CircleCI configuration, you'll need to connect the Nest.js project to CircleCI. To do this, push your code to GitHub and set up CircleCI to detect and start builds in response to repository changes. Now, log in to your CircleCI account, navigate to the left-hand side, and click on the Projects tab. On this page, click on the Create Project button. Now, connect your GitHub account and the repository that contains the Nest.js application and CircleCI configuration.

You'll need to create a private and public SSH key for your repository. To do this, use the following command in your terminal:

ssh-keygen

Ensure you store your public and private SSH keys securely. Copy and paste your private key (the content beginning with -----BEGIN OPENSSH PRIVATE KEY-----) into the private key field. Then, choose your corresponding Repository and input a unique name in the Project Name field. Now click the Create Project button.

When your project is created successfully, commit a change in your code to trigger your pipeline. After that, go to your project and you'll see a build-and-notify action with a success status on your project page. Click on build-and-notify to view more details about your commit.

Adding Slack Integration

You'll need to integrate Slack to receive notifications about build statuses. To do this, create a Slack workspace and give it a unique name. Ensure you use CircleCI as a prefix for the name of your workspace; I used CircleCI-Notifier as the name of mine.

Next, set up your Slack OAuth token for authentication. To do this, go to the Slack API website and click the Create an App button. On the next prompt, choose the From scratch option. After that, input the name of your app, choose the corresponding workspace you created, and click on the Create App button. When your app has been created successfully, go to the OAuth & Permissions section of your app, and scroll to the Scopes section.
In the Scopes section, add the following permissions:

chat:write
chat:write.public
files:write

Next, scroll up to OAuth Tokens for Your Workspace, and click Install to Workspace. When you install your app successfully to your workspace, a Bot User OAuth Token will be generated. Copy it and keep it safe to be used later.

Integrating Twilio for Notifications

To integrate Twilio for notifications, go to your Twilio account dashboard and create a Twilio number. Then scroll down to your Account Info section. Copy your Account SID and Auth Token, and keep them safe. Also, verify your account with your main number, which you'll use to receive SMS notifications from your Twilio number.

Next, go back to your project on CircleCI and navigate to the Project Settings page. On the Project Settings page, click Environment Variables on the side menu. Then click Add Environment Variable, and create the following variables one after the other:

TWILIO_FROM: This variable holds the value for the Twilio phone number that you created, e.g., +12567555381.
TWILIO_TO: This variable determines the destination phone number for your SMS message. Ensure you input your number beginning with your country code.
TWILIO_ACCOUNT_SID: This variable holds the value for your Twilio Account SID.
TWILIO_AUTH_TOKEN: This is your Twilio Auth Token.
SLACK_ACCESS_TOKEN: The token you got when you installed the app on your workspace.
SLACK_DEFAULT_CHANNEL: This variable holds your channel ID as a value. To get this, navigate to your Slack workspace and select your desired channel, e.g., #notification. While you are in this channel, click on the channel name to see the dropdown menu, then scroll down the dropdown menu to see your Channel ID.

Once you are done assigning values to the variables, go back to your project. Now you need to open the .circleci/config.yml file and update its content with the CircleCI Slack and Twilio orb details, like this:

version: 2.1
orbs:
  node: circleci/node@5.0.3
  slack: circleci/slack@4.10.1
  twilio: circleci/twilio@1.0.0
jobs:
  build-test-and-notify:
    executor:
      name: node/default
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: npm install
      - run: npm test
      - slack/notify:
          event: pass
          template: success_tagged_deploy_1
      - slack/notify:
          event: fail
          mentions: "@dibia27"
          template: basic_fail_1
      - twilio/sendsms:
          body: Successful message from Twilio

In the twilio/sendsms step, the body parameter sets the SMS that is sent once the build is successful; in the mentions field, change the value to your Slack username.

Now, commit all the code changes and push them to GitHub. This will automatically trigger the build on the CircleCI UI. Once the process is completed and built successfully, you will receive the custom messages both in the channel you chose for your Slack workspace and by SMS.

Conclusion

In this tutorial, I've shown you how to integrate Twilio and Slack orbs from the CircleCI orb registry into your pipeline with minimal code. By leveraging these orbs, you can effortlessly set up a comprehensive notification system with just a few lines of configuration. Automated notifications are essential in creating efficient CI/CD pipelines, ensuring that you stay well-informed about the status of builds and can promptly address any issues that arise.

Udensi Fortune Arua is a DevOps Engineer and a Technical Writer in Enugu, Nigeria; reach out on LinkedIn and Twitter to connect.
In this article, you will learn how to use Twilio's Programmable Voice tools to build an IVR (interactive voice response) system with speech recognition using Java and Maven.

Prerequisites

IntelliJ IDEA Community Edition for convenient and fast Java project development work. The community edition is sufficient for this tutorial.
Java Development Kit. The Twilio Helper Library works on all versions from Java 8 up to the latest; we've also used the multiline String feature from Java 15 in the code in this post.
ngrok, a handy utility that connects the development version of the Java application running on your system to a public URL that Twilio can connect to.
A Twilio account. If you are new to Twilio, click here to create a free account now.
A phone capable of placing calls to test the project (or you can use the Twilio Dev Phone)

Set up the Project Directory

Follow the tutorial on how to start a Java Spring Boot application as a base for this project. You can name the project "phonetree" and create the directory structure src/main/java/com/twilio/phonetree. Create a subfolder named "ivr" on the same level as the PhonetreeApplication.java file.

Add the Twilio Dependency to the Project

Open the pom.xml file and add the following to the list of dependencies:

<dependency>
    <groupId>com.twilio.sdk</groupId>
    <artifactId>twilio</artifactId>
    <version>9.14.0</version>
</dependency>

We always recommend using the latest version of the helper libraries. At the time of writing this is 9.14.0. Newer versions are released frequently, and you can always check MvnRepository for updates.

Create an Interactive Voice Response App

Navigate to the src/main/java/com/twilio/phonetree/ivr subfolder and create a file named IVREndpoints.java. Start off the file with the required import statements and create a class to hold all of the endpoints for a functioning IVR:

package com.twilio.phonetree.ivr;

import com.twilio.twiml.VoiceResponse;
import com.twilio.twiml.voice.Gather;
import com.twilio.twiml.voice.Say;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class IVREndpoints {

    @RequestMapping(value = "/welcome")
    public VoiceResponse welcome() {
        return new VoiceResponse.Builder()
            .gather(new Gather.Builder()
                .action("/menu")
                .inputs(Gather.Input.SPEECH)
                .say(amySay("""
                    Hello, you're through to the Party Cookies store.
                    What are you calling for today?
                    Say "collection" or "delivery".
                    """))
                .build())
            .build();
    }
}

This class is annotated with @RestController, indicating that it will handle incoming HTTP requests and write the responses directly to the HTTP body. The first endpoint that the caller will hit is /welcome, which will create the Twilio Programmable Voice response. The VoiceResponse object is returned with instructions for Twilio on various actions to interact with the caller. A new VoiceResponse.Builder() is created to build the Twilio XML response. The gather method is used to collect user input. The action("/menu") parameter specifies the URL to which Twilio should send the user's input. The inputs(Gather.Input.SPEECH) call indicates that the input should be collected through the caller's actual speech. The say() function is used by Twilio to say the string message to the caller.
Add the amySay() helper, along with a getVoiceResponse() convenience method used later. They are defined as follows:

private VoiceResponse getVoiceResponse(String message) {
    return new VoiceResponse.Builder()
        .say(amySay(message)).build();
}

private Say amySay(String message) {
    return new Say.Builder(message)
        .voice(Say.Voice.POLLY_AMY)
        .language(Say.Language.EN_GB)
        .build();
}

The method currently uses the Twilio-supported Amazon Polly Voice, but this can be changed to other TwiML voice attributes. The code above will generate the following TwiML.

<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <Gather action="/menu" input="speech">
        <Say language="en-GB" voice="Polly.Amy">
            Hello, you're through to the Party Cookies store.
            What are you calling for today?
            Say "collection" or "delivery".
        </Say>
    </Gather>
</Response>

Build Out the IVR Menu System

Once the gather action has finished its speech recognition, Twilio makes another webhook request to /menu. In order to process the caller's response, a switch statement is used to handle each possible case. Each case represents a different option that the caller can say out loud, and the corresponding method is called to handle that option. If none of the recognized options is found, it defaults to the welcome() method. The switch statement matches lowercase input, so the toLowerCase() function is applied to the gatheredSpeech parameter for proper matching.

Copy and paste the following code to add the /menu endpoint:

@RequestMapping(value = "/menu")
public VoiceResponse menu(@RequestParam("SpeechResult") String gatheredSpeech) {
    return switch (gatheredSpeech.toLowerCase()) {
        case "delivery" -> getDelivery();
        case "collection" -> getCollection();
        case "sparkles" -> getSecretSparkles();
        default -> welcome();
    };
}

The @RequestMapping annotation is used in Spring to map HTTP requests to specific methods or controllers. In this case, it indicates that this method should handle requests to the "/menu" endpoint. These methods can be called with HTTP GET or POST requests.

Allow the Caller to Respond to the IVR System

Currently the application prompts the caller to say either "collection" or "delivery" to move forward in the interactive voice response system. Write the following code to define the functions that handle the respective cases:

private VoiceResponse getDelivery() {
    String message = """
        The kitchen is baking as quickly as possible for the holiday season.
        Your cookies will be delivered within 2 hours, with a dash of magic that will blow your mind.
        In the meantime, prepare your taste buds. The kitchen appreciates your patience.
        """;
    return getVoiceResponse(message);
}

private VoiceResponse getCollection() {
    String message = """
        Congratulations, you're about to experience cookie perfection!
        I've got your batch ready and waiting for pickup.
        Just a heads up, after one bite, you might question every cookie you've ever had before.
        Swing by whenever you're ready to upgrade your taste buds.
        """;
    return getVoiceResponse(message);
}

However, we can spice up the IVR system by including an option to say a secret code that triggers another case. Copy and paste the following code to handle the case where the caller responds with "sparkles":

private VoiceResponse getSecretSparkles() {
    String message = """
        Oh, you've heard whispers about the legendary secret holiday menu, have you?
        Well, you're in luck because today is your lucky day!
        Buckle up for a taste adventure that transcends ordinary holidays.
        But fair warning, once you experience it, the regular holiday fare might feel a bit lackluster.
        Ask and you shall receive, my friend.
        Prepare to be dazzled by our exclusive holiday magic!
        """;
    return getVoiceResponse(message);
}

Use a TwiML Message Converter

Spring Web converts returned Java objects into JSON responses by default; however, this project returns many VoiceResponse objects, which Spring Boot does not know how to serialize out of the box. Following this article on how to return custom types in HTTP responses using Spring Web, a custom message converter calls the .toXml() function to appropriately convert them to TwiML. Create a file named TwiMLMessageConverter.java and paste in the following code snippet:

package com.twilio.phonetree.ivr;

import com.twilio.twiml.TwiML;
import org.springframework.http.HttpInputMessage;
import org.springframework.http.HttpOutputMessage;
import org.springframework.http.MediaType;
import org.springframework.http.converter.AbstractHttpMessageConverter;
import org.springframework.http.converter.HttpMessageNotReadableException;
import org.springframework.http.converter.HttpMessageNotWritableException;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

@Component
public class TwiMLMessageConverter extends AbstractHttpMessageConverter<TwiML> {

    public TwiMLMessageConverter() {
        super(MediaType.APPLICATION_XML, MediaType.ALL);
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return TwiML.class.isAssignableFrom(clazz);
    }

    @Override
    protected boolean canRead(MediaType mediaType) {
        return false; // we don't ever read TwiML
    }

    @Override
    protected TwiML readInternal(Class<? extends TwiML> clazz, HttpInputMessage inputMessage)
            throws IOException, HttpMessageNotReadableException {
        return null;
    }

    @Override
    protected void writeInternal(TwiML twiML, HttpOutputMessage outputMessage)
            throws IOException, HttpMessageNotWritableException {
        outputMessage.getBody().write(twiML.toXml().getBytes(StandardCharsets.UTF_8));
    }
}

Compile and Run the Application

If you want to double-check that your code matches ours, view the full code in this GitHub repository. In your IDE, navigate to the PhonetreeApplication.java file, click on the green play button next to the public class definition, and select the Run option. You can also run it in the terminal with ./mvnw spring-boot:run.

As the app is running on http://localhost:8080, expose the application to a public URL with ngrok using the command ngrok http 8080. Ngrok is a great tool because it allows you to create a temporary public domain that redirects HTTP requests to our local port 8080. If you do not have ngrok installed, follow the instructions in this article to set it up. Your ngrok terminal will display public URLs in the "Forwarding" section; these are the URLs that ngrok uses to redirect requests to our local server.

Configure Twilio Service

Go to the Twilio Console and navigate to the Phone Numbers section in order to configure the webhook. Add the ngrok URL in the text field under the A call comes in section. Make sure the URL is in the "https://xxxx.ngrok.app/welcome" format.

Test out the Interactive Voice Response App

Grab your cellular device and dial the phone number to test out Party Cookie's hotline. It's time to satisfy your customers' sweet tooth by selling desserts to them!
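If you'd also like to sanity-check the endpoints without placing a phone call, a test like the sketch below can assert on the generated TwiML. This is an illustrative addition, not part of the tutorial; it assumes spring-boot-starter-test is on the classpath and that the class names match the ones above.

package com.twilio.phonetree.ivr;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.hamcrest.Matchers.containsString;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@SpringBootTest
@AutoConfigureMockMvc
class IVREndpointsTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void welcomeRespondsWithGatherTwiML() throws Exception {
        // Twilio sends webhook requests as POST by default
        mockMvc.perform(post("/welcome"))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("<Gather")));
    }
}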
What's Next for Interactive Voice Response Applications in Java?

Congratulations on building your own Party Cookie hotline! Now that you have an IVR up and running, check out this article on how you can implement best practices for your call center. If you are looking for a customizable product to use at scale and build faster, you can build with Flex. You can also build the IVR system with Gradle and Java Servlets instead. For those looking to build faster with a team, consider building with Twilio Studio, which requires no coding experience.

Diane Phan is a developer on the Twilio Voices team. She loves to help programmers tackle difficult challenges that might prevent them from bringing their projects to life. She can be reached at dphan [at] twilio.com or LinkedIn.
In honor of 11/30/23 day (in which the digits correspond to the respective numbers of everyone's favorite NBA trio of Klay Thompson, Stephen Curry, and Draymond Green), read on to see how to build an application that predicts whether a basketball shot is made using OpenAI's new GPT-4V API, Twilio Serverless, and Twilio Programmable Messaging with Node.js. Do you prefer learning via video? Check out this TikTok summarizing this tutorial in one minute!

GPT-4V

ChatGPT's image understanding is powered by a combination of multimodal GPT-3.5 and GPT-4 models. GPT-4 Vision (or GPT-4V) allows the GPT-4 model to take in images and answer questions about them, providing accurate information about objects in the images and performing tasks such as object counting.

Prerequisites

A Twilio account - sign up for a free Twilio account here
A Twilio phone number with SMS capabilities - learn how to buy a Twilio phone number here
OpenAI account - make an OpenAI account here
Node.js installed - download Node.js here

Get Started with OpenAI

After making an OpenAI account, you'll need an API key. You can get an OpenAI API key here by clicking on + Create new secret key. Save that API key for later to use the OpenAI client library in your Twilio Function.

Get Started with Twilio Functions and the Serverless Toolkit

Twilio Functions is a serverless environment on Twilio where you can quickly create event-driven microservices, integrate with 3rd party endpoints, and extend Twilio Studio flows with custom logic. The Serverless Toolkit is CLI tooling that helps you develop Twilio Functions locally and deploy them to Twilio Functions & Assets. The best way to work with the Serverless Toolkit is through the Twilio CLI. If you don't have the Twilio CLI installed yet, run the following commands on the command line to install it and the Serverless Toolkit:

npm install twilio-cli -g
twilio login
twilio plugins:install @twilio-labs/plugin-serverless

Afterward, create your new project and install our lone package, openai:

twilio serverless:init shot-prediction-sms --template=blank
cd shot-prediction-sms
npm install -s openai

Set an Environment Variable with Twilio Functions

Open up the .env file for your Functions project in your root directory and add the following line, replacing YOUR-OPENAI-API-KEY with the OpenAI API key you took note of earlier:

OPENAI_API_KEY=YOUR-OPENAI-API-KEY

Now, you can access this API key in your code with context.OPENAI_API_KEY.
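Note that the function below instantiates the client with new OpenAI() and no arguments: the openai library falls back to process.env.OPENAI_API_KEY, which Twilio Functions also populates from your .env file. If you prefer the explicit context form mentioned above, this equivalent one-liner (illustrative) works inside the handler:

// Explicit alternative to the zero-argument constructor used below
const openai = new OpenAI({ apiKey: context.OPENAI_API_KEY });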
}, { type: "image_url", image_url: { "url": imgUrl, }, }, ], }, ], "max_tokens": 500 }); console.log(response.choices[0].message.content); msg = `${JSON.stringify(response.choices[0].message.content)}`; } twiml.message(msg); callback(null, twiml); }; This code makes an async function to handle incoming messages. First, it creates a Twilio MessagingResponse object to respond to inbound messages, as well as a new OpenAI client object. It then checks whether the inbound message contains an image; if not, a message is sent back telling the user to send one. Otherwise, this function gets the inbound image URL from the Twilio Functions event object, which you can read more about here. That image is passed to OpenAI with a text prompt; this tutorial uses the "grandma exploit" for more consistent and better output from the model. max_tokens, which caps the number of tokens the model can generate for the completion response, is also set after the content array. The response is then returned via outbound text message using TwiML. For more details about working with images (or video) using GPT-4V, check out OpenAI's documentation here. You can view the complete code from above on GitHub here. Configure the Function with a Twilio Phone Number To deploy your app to Twilio, run twilio serverless:deploy from the shot-prediction-sms root directory. You should see the URL of your Function at the bottom of your terminal: Using the Twilio CLI, you can update the phone number using the Phone Number SID of your Twilio phone number. You can see it in the Twilio Console under Properties and it begins with "PN". twilio phone-numbers:update {PHONE_NUMBER_SID|E164} \ --sms-url {your Function URL ending in /sms-gpt4v} If you don't wish to configure your Twilio phone number using the Twilio CLI, you can grab the Function URL corresponding to your app (the one that ends with /sms-gpt4v) and configure a Twilio phone number with it as shown below: Select the Twilio number you just purchased in your Twilio Phone Numbers console and scroll down to the Messaging section. Paste the link in the text field for A MESSAGE COMES IN webhook, making sure that it's set to HTTP POST. When you click Save, it should look like this! The Service is the Serverless project name, the Environment provides no other options, and the Function Path is the file name. Now take out your phone and text an image of someone shooting a basketball to your Twilio number. What's Next for GPT-4V and Twilio? The development possibilities offered by GPT-4V and Twilio are endless! For next steps here, I'd love to use this NBA Shot logs dataset and search through Curry's, Green's, and Thompson's shot history to better predict whether their shot will go in based on their historical shot logs. There's so much fun to be had as a builder with prompting via SMS or WhatsApp. You can also pass videos to GPT-4V. Let me know what you're working on with OpenAI and GPT-4V. I can't wait to see what you build. Twitter: @lizziepika GitHub: elizabethsiegle Email: lsiegle@twilio.com
In the last article, you learned how to upload files in CakePHP. In this tutorial, we'll take things further by creating a drag-and-drop file upload in CakePHP using Dropzone.js. Dropzone leverages AJAX to upload files without requiring a page refresh, making it an effective tool for developers and users. Prerequisites To follow this tutorial, make sure you have the following: Basic knowledge of PHP and web development concepts PHP 8.2 installed Access to a MySQL server Composer installed globally Create a CakePHP project To do this, navigate to the folder where you want to install the project and run the following command: composer create-project --prefer-dist cakephp/app:~4.0 drag_upload \ && cd drag_upload When you're asked: Set Folder Permissions ? (Default to Y) [Y,n]?, answer with Y. This will install the latest 4.x version of CakePHP, use drag_upload as the folder name for the project, and change into the newly created project directory. Connect the database to the application To connect the database to the application, open config/app_local.php in your preferred code editor or IDE. In the default section, inside the Datasources configuration, change the host, username, password, and database properties to match the credentials of your database, like so: From the image above, the host was changed to 127.0.0.1, the username to root, the password was left blank, and the database was set to the one created earlier. Create the database To begin, we need a database to store information about uploaded files. Create one; I'll be naming mine drag_file. The next thing is to create a new table in your database called drag_and_drop using the migrations feature in CakePHP. The table needs to contain the following fields: id: This field will serve as the unique identifier for each uploaded file. It should have a type of integer, be the table's primary index, with the not null and auto-increment attributes set. filename: This field will store the name of the uploaded file. It should have a data type of varchar with a size of 255. To do this, open up the terminal and run this command: bin/cake bake migration CreateDragAndDrop This will create a migrations file in config/Migrations/ ending with _CreateDragAndDrop.php. Open it, and replace the body of the change() function with the following code: $table = $this->table('drag_and_drop'); $table->addColumn('filename', 'string', [ 'default' => null, 'limit' => 255, 'null' => false, ]); $table->create(); Next, run this command to run the migration: bin/cake migrations migrate This will create a table called drag_and_drop in the database. Our database should now look similar to: Now, start the development server in your terminal by running this command: bin/cake server Now, if you open http://localhost:8765 in your browser of choice, it should look similar to the screenshot below. Create a model and entity To create a model and entity, open up a new terminal and run this command: bin/cake bake model drag_and_drop --no-validation --no-rules Running this command will create the model file DragAndDropTable.php inside the /src/Model/Table folder, and the entity file DragAndDrop.php inside the /src/Model/Entity folder. Create a controller To create a controller, open up the terminal once again and run this command: bin/cake bake controller Dropzone --no-actions Running this command will create a file called DropzoneController.php inside the src/Controller folder.
Open the file, and inside it, paste this: <?php declare(strict_types=1); namespace App\Controller; use Cake\Datasource\RepositoryInterface; class DropzoneController extends AppController { private ?RepositoryInterface $DragAndDrop; public function initialize(): void { parent::initialize(); $this->loadModel("DragAndDrop"); } public function dropzone() { $uploadObject = $this->DragAndDrop->newEmptyEntity(); if ($this->request->is("post")) { $image = $this->request->getData('file'); $hasFileError = $image->getError(); // no file uploaded if ($hasFileError > 0) { $data["filename"] = ""; } else { // file uploaded $fileName = str_replace(" ", "-", $image->getClientFilename()); $fileType = $image->getClientMediaType(); if ($fileType === "image/png" || $fileType === "image/jpeg" || $fileType === "image/jpg") { $imagePath = WWW_ROOT . "img/" . $fileName; $image->moveTo($imagePath); $data["filename"] = "img/" . $fileName; } else { // unsupported file type; store an empty filename $data["filename"] = ""; } } $uploadObject = $this->DragAndDrop->patchEntity($uploadObject, $data); if ($this->DragAndDrop->save($uploadObject)) { echo json_encode(["status" => 1, "message" => "Uploaded"]); } else { echo json_encode(["status" => 0, "message" => "Failed to upload"]); } } $this->set(compact("uploadObject")); } } In the code above, DropzoneController manages the file uploads via Dropzone. It initializes the controller, loads the DragAndDrop model, and handles file uploads. It also validates uploaded files, processes valid image files, and saves them to a specific directory, falling back to an empty filename for unsupported file types. Create the template To create a template, navigate to the templates folder and create a folder named Dropzone. Then, in the templates/Dropzone folder, create a file called dropzone.php and paste the following into it: <!DOCTYPE html> <html lang="en"> <head> <title>CakePHP 4 Drag And Drop File Upload Using Dropzone</title> <script src="https://unpkg.com/dropzone@5/dist/min/dropzone.min.js"></script> <link rel="stylesheet" href="https://unpkg.com/dropzone@5/dist/min/dropzone.min.css" type="text/css" /> </head> <body> <div class="container section"> <div class="row"> <div class="col-md-8 col-md-offset-2"> <h3 class="text-center">CakePHP 4 Drag And Drop File Upload Using Dropzone</h3> <?= $this->Form->create($uploadObject, [ "enctype" => "multipart/form-data", "id" => "image-upload", "class" => "dropzone" ]) ?> <div> <h3 class="text-center">Upload Multiple Image By Click On Box</h3> </div> <?= $this->Form->end() ?> </div> </div> </div> <script type="text/javascript"> Dropzone.options.imageUpload = { maxFilesize: 1, acceptedFiles: ".jpeg,.jpg,.png", }; </script> </body> </html> The code above creates a user interface that facilitates file uploads using Dropzone.js. It includes the necessary styles and scripts, configures a form for file uploads, and defines settings for the Dropzone component to enable users to drag and drop multiple image files while restricting the file types and sizes. Note that the accepted file types here match the ones the controller actually processes. Add a route This will be the last thing that needs to be done. Navigate to config/routes.php and inside the call to $routes->scope(), paste this: $builder->connect( '/dropzone', ['controller' => 'Dropzone', 'action' => 'dropzone'] ); Restart and test the application You can now check that the new functionality works as expected by opening http://localhost:8765/dropzone in your preferred browser. The page should look similar to the screenshot below. To upload an image, drag the desired image inside the box and it automatically uploads. You can upload as many as you want.
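If you'd also like to confirm the uploads from the database side, a quick query shows what gets stored (the row values below are purely illustrative):

SELECT id, filename FROM drag_and_drop;
-- Example output (illustrative):
-- +----+------------------+
-- | id | filename         |
-- +----+------------------+
-- |  1 | img/my-photo.png |
-- +----+------------------+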
When a file is successfully uploaded, it immediately provides a preview, which is helpful. After uploading several images, the drag_and_drop table in your database should look like the screenshot below: That's how to implement drag-and-drop file upload in CakePHP with Dropzone.js. In this article, we looked at another way to upload files in CakePHP: a drag-and-drop method using Dropzone.js. Dropzone helps by simplifying the management of file uploads in CakePHP. Happy coding! Temitope Taiwo Oyedele is a software engineer and technical writer. He likes to write about things he’s learned and experienced.
In this tutorial, you will learn how to build a WhatsApp chatbot application that will allow you to upload a PDF document and retrieve information from it. You are going to use a PDF document containing a few waffle recipes, but what you will learn here can be used with any PDF document. To build this application, you will use the Twilio Programmable Messaging API for responding to WhatsApp messages. You will combine a framework named LangChain and the OpenAI API to process PDFs, create OpenAI embeddings, store the embeddings in a vector store, and select models for answering user queries related to the information contained in the document embeddings. Embeddings are numeric representations of text, capturing semantic meaning and aiding in various natural language processing tasks. A vector store is a storage and retrieval structure for vector embeddings; it is commonly used in tasks such as information retrieval and similarity search, enhancing the contextual understanding of textual data by machines. The Twilio Programmable Messaging API is a service that allows developers to programmatically send and receive SMS, MMS, and WhatsApp messages from their applications. LangChain is a language model-driven framework for context-aware applications that leverage language models for reasoning and decision-making. It connects models to context sources and facilitates reasoning for dynamic responses and actions. The OpenAI API is a service that provides access to OpenAI's language models and artificial intelligence capabilities for natural language processing tasks. By the end of this tutorial, you will have a chatbot that allows you to chat with any PDF document: Tutorial Requirements: To follow this tutorial, you will need the following components: Node.js (v18.18.1+) and npm installed. Ngrok installed and the auth token set. A free Twilio account. A free Ngrok account. An OpenAI account. This PDF file, stored on a device that has access to a WhatsApp client. This document was originally downloaded from the Breville.com website and contains 4 waffle recipes. (Click on the Download raw file button to download the file) Setting up the environment In this section, you will create the project directory, initialize a Node.js application, and install the required packages. Open a terminal window and navigate to a suitable location for your project. Run the following commands to create the project directory and navigate into it: mkdir chat-with-document cd chat-with-document Use the following command to create a directory named documents, where the chatbot will store the PDF document that the user wants to retrieve information from: mkdir documents Run the following command to create a new Node.js project: npm init -y Now, use the following command to install the packages needed to build this application: npm install twilio express body-parser dotenv node-fetch langchain pdf-parse hnswlib-node With the command above you installed the following packages: twilio: a package that allows you to interact with the Twilio API. It will be used to send WhatsApp messages to the user. express: a minimal and flexible Node.js back-end web application framework that simplifies the creation of web applications and APIs. It will be used to serve the Twilio WhatsApp chatbot. body-parser: an Express body parsing middleware. It will be used to parse the URL-encoded request bodies sent to the Express application.
dotenv: a Node.js package that allows you to load environment variables from a .env file into process.env. It will be used to retrieve the Twilio and OpenAI API credentials that you will soon store in a .env file. node-fetch: a Node.js library for making HTTP requests to external resources. It will be used to download the PDF documents sent to the chatbot. langchain: a framework for building context-aware applications that use language models for reasoning and dynamic responses. It will allow an AI model to retrieve information from a document. pdf-parse: a Node.js library for extracting text content and metadata from PDF files. It will be used under the hood by a LangChain module to retrieve the text from the document containing the recipes. hnswlib-node: a package that provides Node.js bindings for Hnswlib. HNSWLib is an in-memory vector store that can be saved to a file. It will be used to store the document information in a format suited for AI models. Collecting and storing your credentials In this section, you will collect and store the Twilio and OpenAI credentials that will allow you to interact with the Twilio and OpenAI APIs. Twilio credentials Open a new browser tab and log in to your Twilio Console. Once you are on your console, copy the Account SID and Auth Token, create a new file named .env in your project’s root directory, and store these credentials in it: TWILIO_ACCOUNT_SID=<your Twilio account SID> TWILIO_AUTH_TOKEN=<your Twilio account auth token> OpenAI credentials Open a new browser tab, log in to your OpenAI account, and when prompted to select a page click on the button that says API. Once you are logged in, click on the button located in the top right corner with the text Personal or Business (depending on your account type) to open a dropdown menu, and then click the View API Keys button in this menu to navigate to the API page. On the API keys page, click the Create new Secret Key button to generate a new API Key. Once the API key is generated, copy it and store it in the .env file as the value for OPENAI_API_KEY: TWILIO_ACCOUNT_SID=<your Twilio account SID> TWILIO_AUTH_TOKEN=<your Twilio account auth token> OPENAI_API_KEY=<your OpenAI API key> Creating the chatbot In this section, you'll create a WhatsApp chatbot application that can handle user messages, provide responses, and store incoming documents in the documents directory. In the project’s root directory create a file named server.js and add the following code to it: const express = require('express'); const bodyParser = require('body-parser'); const twilio = require('twilio'); const fs = require('fs'); require('dotenv').config(); const app = express(); const port = 3000; app.use(express.json()); app.use(bodyParser.urlencoded({ extended: false })); The code begins by importing the express, body-parser, twilio, fs, and dotenv packages needed to create and serve a Twilio WhatsApp chatbot capable of receiving and storing documents. After importing the packages, the code sets up an Express server, sets the port to 3000, and configures the json and body-parser middlewares to parse JSON and URL-encoded request bodies.
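Before wiring things up, it helps to know roughly what a parsed webhook body looks like. The sketch below is a trimmed illustration with placeholder values; field names such as Body, From, To, and MediaUrl0 are the ones Twilio sends, but real requests include additional fields:

// A trimmed illustration of req.body for an incoming WhatsApp webhook once the
// middleware has parsed it (placeholder values; real requests carry more fields):
// {
//   MessageSid: 'SMXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
//   From: 'whatsapp:+15551234567',   // the user
//   To: 'whatsapp:+14155238886',     // the Twilio sandbox number
//   Body: '/start',
//   NumMedia: '1',
//   MediaUrl0: 'https://api.twilio.com/...',   // present only when media is attached
//   MediaContentType0: 'application/pdf'
// }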
Add the following code to the bottom of the server.js file: const accountSid = process.env.TWILIO_ACCOUNT_SID; const authToken = process.env.TWILIO_AUTH_TOKEN; const twilioClient = twilio(accountSid, authToken); Here, the Twilio API credentials (TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN) are retrieved from the environment variables and used to create a new Twilio client instance, which is then stored in a constant named twilioClient. Add the following code below the twilioClient constant: function sendMessage(message, from, to) { twilioClient.messages .create({ body: message, from: from, to: to }) .then((msg) => console.log(msg.sid)); }; The code above defines a JavaScript function named sendMessage that is responsible for sending a WhatsApp message using the Twilio WhatsApp API. The function takes as parameters the message that should be sent, the sender’s phone number, and the recipient’s phone number, then uses the twilioClient.messages.create() method alongside these parameters to create and send the WhatsApp message. The message SID will be printed to the console if the message is successfully sent. Add the following code below the sendMessage() function: async function saveDocument(mediaURL) { try { const fetch = (await import('node-fetch')).default; const filepath = './documents/document.pdf'; return new Promise(async (resolve, reject) => { await fetch(mediaURL) .then((res) => { res.body.pipe(fs.createWriteStream(filepath)) res.body.on("end", () => resolve(true)); }).catch((error) => { console.error(error) resolve(false) }); }) } catch (error) { console.error(error); return false; } } Here, the code defines an asynchronous function named saveDocument that takes a media file URL as a parameter. This function is responsible for downloading the PDF file from the URL parameter and saving it as document.pdf in the documents directory. The code begins by dynamically importing the node-fetch module and setting the path where the document will be saved. Next, the code returns a promise where the fetch function is used to download the PDF file, and the fs.createWriteStream(filepath) method is used to save the file in the specified path. If the file is downloaded and stored successfully, the function returns true. However, if an error occurs, the function returns false. Add the following code below the saveDocument() function: async function handleIncomingMessage(req) { const { Body } = req.body; let message = "" if (Body.toLowerCase().includes("/start")) { message = "Please send me the PDF document that you would like to chat with" return message } else { const question = Body; message = `Your question is: ${question}` return message } } The code above defines an asynchronous function named handleIncomingMessage() that takes a request object containing the incoming message as a parameter. This function is responsible for handling incoming messages and formulating responses based on the content of those messages. First, the code retrieves the incoming message body, stores it in a constant named Body, and then defines a variable named message which gets sent back to the user. Next, the code checks if the message contains the string "/start" and, if that is the case, it prompts the user to send the PDF document that the user wants to chat with in the response message and returns the response message. If the message does not contain the string "/start", the code assumes that the incoming message contains a question and acknowledges the question by repeating it in the response message and returns the response message.
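If you'd like to sanity-check handleIncomingMessage() in isolation before wiring it to a route, a quick hypothetical call (not part of the final app) exercises both branches:

// Hypothetical quick check, not part of the final app:
handleIncomingMessage({ body: { Body: "/start" } })
  .then(console.log); // -> Please send me the PDF document that you would like to chat with
handleIncomingMessage({ body: { Body: "How long do the waffles take?" } })
  .then(console.log); // -> Your question is: How long do the waffles take?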
Add the following code below the handleIncomingMessage() function: app.post('/incomingMessage', async (req, res) => { const { To, Body, From } = req.body; let message = "" if (req.body['MediaUrl0'] === undefined) { message = await handleIncomingMessage(req); sendMessage(message, To, From); return res.status(200).send(); } else { } }); This code defines an Express.js route that handles incoming HTTP POST requests at the path '/incomingMessage'. This route is responsible for receiving the WhatsApp messages sent by the user, distinguishing between text messages and document uploads, and generating appropriate response messages. The code begins by retrieving the message recipient, body, and sender from the request body, storing them in the To, Body, and From variables respectively, and then defines a variable named message where it will store the message that will be sent back to the user. Next, the code checks if the MediaUrl0 property in the request body is undefined, suggesting that the message does not contain a document. If that is the case, the code calls the handleIncomingMessage function, and passes the request object as an argument, to generate a response message. This response message is assigned to the message variable. Lastly, the code calls the sendMessage function to send the response message back to the user and ends the request with res.status(200).send(), returning an HTTP 200 status code to indicate that the request was successfully processed. Take note of how, when calling the sendMessage function, the To and From variables switch places, since the user who sent the message is now the receiver and the chatbot the sender. Add the following code inside the else statement: app.post('/incomingMessage', async (req, res) => { if (req.body['MediaUrl0'] === undefined) { ... } else { message = "Please wait, it can take several seconds to process this document"; sendMessage(message, To, From); const wasDocumentSaved = await saveDocument(req.body['MediaUrl0']); if (!wasDocumentSaved) { message = "Failed to save document"; sendMessage(message, To, From); return res.status(200).send(); } message = "Document saved"; sendMessage(message, To, From); return res.status(200).send(); } }); The code inside the else statement will run if 'MediaUrl0' is defined, indicating that a document is being uploaded. The code begins by assigning a Please wait message to the message variable, indicating that the chatbot is processing the document. Next, the sendMessage() function is called to send the "Please wait" message to the user who uploaded the document. The saveDocument() function is called to download and save the document from the URL provided in 'MediaUrl0'. The boolean value returned is stored in a constant named wasDocumentSaved, indicating whether the document was successfully saved. If the document was not saved, an error message is assigned to the message variable, and the sendMessage() function is called to send the error message. If the document was successfully saved, a Document saved message is assigned to the message variable, and the sendMessage() function is called to send this success message. After handling the incoming message, the route function ends by returning an HTTP status code of 200, indicating that the request was successfully processed. Add the following code to the bottom of your server.js file: app.listen(port, () => { console.log(`Express server running on port ${port}`); }); Here, the Express server is started using the app.listen() method on port 3000.
When the server starts, a message stating that the server is running is printed to the console. Running the chatbot and making it publicly accessible In this section, you will run the Express application to serve the chatbot, use Ngrok to make the application publicly accessible, and configure the Twilio WhatsApp settings in the Twilio console. Go back to your terminal and run the following command to start the application: node server.js Open another tab in the terminal and run the following command to expose the application: ngrok http 3000 Copy the https Forwarding URL provided by Ngrok. Go back to your Twilio Console main page, click on the Develop tab, click on Messaging, click on Try it out, then click on Send a WhatsApp message to navigate to the WhatsApp Sandbox page. Once you are on the Sandbox page, scroll down and follow the instructions to connect to the Twilio sandbox. The instructions will ask you to send a specific message to a Twilio Sandbox WhatsApp Number. After following the connect instructions, scroll back up and click on the button with the text Sandbox settings to navigate to the WhatsApp Sandbox settings page. Once on the Sandbox settings page, paste the Ngrok https URL in the “When a message comes in” field followed by /incomingMessage, set the method to POST, and click on the Save button; your WhatsApp bot should now be able to receive messages. Ensure your URL looks like the one below: Open a WhatsApp client, send a message with any text, and the chatbot will send a reply with the text you sent. Send a message with the text /start and the chatbot will prompt you to send a PDF document. Send the PDF document containing the waffle recipes and the chatbot will send a reply stating that the document was saved. Before moving to the next section, go back to the terminal tab running the application and stop the application. Generating the embeddings In this section, you will load the document that you wish the chatbot to understand, generate embeddings for the PDF document, and store the embeddings in a vector store. In the project’s root directory create a file named embeddingsGenerator.js and add the following code to it: const { PDFLoader } = require("langchain/document_loaders/fs/pdf"); const { OpenAIEmbeddings } = require("langchain/embeddings/openai"); const { HNSWLib } = require("langchain/vectorstores/hnswlib"); require('dotenv').config(); const OPENAI_API_KEY = process.env.OPENAI_API_KEY; The code starts by importing the PDFLoader, OpenAIEmbeddings, and HNSWLib modules from the langchain library, as well as the dotenv library. The PDFLoader module will be used to load the PDF document that you want to chat with. The OpenAIEmbeddings module will be used to generate embeddings compatible with OpenAI models. The HNSWLib module will be used alongside the hnswlib-node library to store the embeddings. The code then stores the OpenAI API key in a constant named OPENAI_API_KEY. Add the following code below the OPENAI_API_KEY constant: async function generateAndStoreEmbeddings() { try { const loader = new PDFLoader("./documents/document.pdf"); const docs = await loader.load(); const vectorStore = await HNSWLib.fromDocuments( docs, new OpenAIEmbeddings({ openAIApiKey: OPENAI_API_KEY }), ); await vectorStore.save("embeddings"); console.log("embeddings created"); return true; } catch (error) { console.error(error); return false; } } Here, an async function named generateAndStoreEmbeddings enclosed within a try-catch block was defined.
This function is responsible for generating and storing document embeddings. Inside the function, the code begins by creating a new instance of the PDFLoader class, passing the document.pdf file path as an argument. It then uses the load() method on the PDFLoader instance to load the specified PDF document. After loading the document, it creates a vector store using the HNSWLib.fromDocuments method. This method creates a vector representation of the document, using the HNSWLib vector store and the OpenAIEmbeddings. The vector store is then saved (note the await, since save() is asynchronous) in a folder named "embeddings" in your project's root directory for future use. Lastly, if this entire process is successful, a message stating that the embeddings were created is printed to the console and the function returns true. However, if an error occurs, the error is printed to the console and the function returns false. Add the following code below the generateAndStoreEmbeddings() function: generateAndStoreEmbeddings(); module.exports = { generateAndStoreEmbeddings }; The first line of code above calls the generateAndStoreEmbeddings() function and the second exports this function. Go back to your terminal and use the following command to run this file: node embeddingsGenerator.js After executing the command above, a folder named embeddings containing the document’s embeddings will be created in your project’s root directory. Before moving to the next section, comment out the generateAndStoreEmbeddings() function call: // generateAndStoreEmbeddings(); Retrieving information from the document In this section, you will use the document’s embeddings alongside an OpenAI model to retrieve information. In the project’s root directory create a file named inference.js and add the following code to it: const { OpenAI } = require("langchain/llms/openai"); const { HNSWLib } = require("langchain/vectorstores/hnswlib"); const { OpenAIEmbeddings } = require("langchain/embeddings/openai"); const { RetrievalQAChain } = require("langchain/chains"); require('dotenv').config(); const OPENAI_API_KEY = process.env.OPENAI_API_KEY; const model = new OpenAI({ modelName: "gpt-3.5-turbo" }); The code starts by importing the OpenAI, HNSWLib, OpenAIEmbeddings, and RetrievalQAChain modules from the langchain library, as well as the dotenv library. The RetrievalQAChain module is designed to streamline and simplify the process of building a retrieval-based question-answering system, where answers are retrieved from stored representations of documents or text data. Next, the code stores the OpenAI API key in a constant named OPENAI_API_KEY. It then creates an instance of the OpenAI model using the OpenAI class from the langchain library. The model is specified with the name gpt-3.5-turbo. This model is designed for natural language processing and generation. Add the following code below the model constant: async function ask(question) { try { const vectorStore = await HNSWLib.load( "embeddings", new OpenAIEmbeddings({ openAIApiKey: OPENAI_API_KEY }), ); const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever()); const result = await chain.call({ query: question, }); console.log(result); return result.text; } catch (error) { console.error(error); return "AI model failed to retrieve information"; } } The code defines an async function named ask enclosed within a try-catch block. This function takes a question as an argument and is responsible for performing a question-answering task using the OpenAI model.
Inside this function, the code loads a vector store from the "embeddings" folder. This is the vector store containing the recipe document’s embeddings that were created in the previous section. Next, it creates a RetrievalQAChain using the OpenAI model and the vector store. This chain is set up to handle the question-answering process. The function then calls the chain.call() method with a question as the query and awaits the result. If this entire process is successful, the result is printed to the console and the function returns the value stored in the result's text property. However, if an error occurs, the error is printed to the console and the function returns a message stating that the model failed to retrieve the information. Add the following code below the ask() function: const question = "What is the prep time for each recipe?"; ask(question); module.exports = { ask }; The first line of code above defines a constant named question that holds a string that will be used to ask the model how long it takes to prepare each recipe. The second line calls the ask() function and passes the question as an argument. The third line exports the ask() function. Go back to your terminal and use the following command to run this file: node inference.js After executing the command above you should see the following output: { text: 'Classic Waffles: 10 minutes\n' + 'Chocolate Waffles: 15 minutes\n' + 'Three-Cheese Soufflé Waffles: 15 minutes\n' + 'Waffles with Poached Rhubarb and Vanilla Custard: 15 minutes' } The output above shows that the model is now able to retrieve information from the recipes document. Before moving to the next section, comment out the question constant and the ask() function call: // const question = "What is the prep time for each recipe?"; // ask(question); Chat with document In this section, you will integrate the embeddings generation and query features created in the previous two sections into the chatbot to allow users to retrieve information from a document. Open the server.js file and add the following code below the twilioClient constant declaration located around line 14: const { generateAndStoreEmbeddings } = require('./embeddingsGenerator'); const { ask } = require('./inference'); Here, the code uses destructuring to import the generateAndStoreEmbeddings() and ask() functions from the embeddingsGenerator.js and inference.js files respectively. Go to the handleIncomingMessage() function located around line 49 and replace the code in the else statement with the following: async function handleIncomingMessage(req) { ... if (Body.toLowerCase().includes("/start")) { ... } else { const question = Body; message = await ask(question); return message; } } The new code in the else statement calls the ask() function to use an AI model to retrieve information from a document and stores the value returned in the message variable. Go to the /incomingMessage route handler located around line 63 and replace the last three lines inside the else statement with the following: app.post('/incomingMessage', async (req, res) => { ... if (req.body['MediaUrl0'] === undefined) { ... } else { ...
if (!wasDocumentSaved) { message = "Failed to save document"; sendMessage(message, To, From); return res.status(200).send(); } const wasEmbeddingsGenerated = await generateAndStoreEmbeddings(); if (!wasEmbeddingsGenerated) { message = "Document embeddings were not generated"; sendMessage(message, To, From); return res.status(200).send(); } message = "Document embeddings were generated and stored, ask anything about the document"; sendMessage(message, To, From); return res.status(200).send(); } }); The added code begins by calling the generateAndStoreEmbeddings() function to generate the document embeddings and store them. The boolean value returned is stored in a constant named wasEmbeddingsGenerated, indicating whether the embeddings were generated and stored. If the embeddings were not generated and stored, an error message is assigned to the message variable, and the sendMessage() function is called to send the error message. If the embeddings were successfully generated and stored, a message stating this is assigned to the message variable, and the sendMessage() function is called to send this success message. After sending the message, the route function ends by returning an HTTP status code of 200, indicating that the request was successfully processed. Go back to the terminal and run the following command to start the chatbot application: node server.js Return to your WhatsApp client, send a message with the text /start, and the chatbot will prompt you to send a PDF document. Send the PDF document containing the waffle recipes, and the chatbot will send a reply stating that the document embeddings were generated. Send a message containing a question about the PDF document, and the chatbot will send a reply with the desired information. Conclusion In this tutorial, you learned how to create a WhatsApp chatbot capable of retrieving information from a PDF document containing waffle recipes. You've learned how to leverage the Twilio Programmable Messaging API for message handling, integrate LangChain and the OpenAI API to process PDFs, generate and store document embeddings, and select appropriate models to respond to user queries based on the document's content. The code for the entire application is available in the following repository: https://github.com/CSFM93/twilio-chat-with-document. Carlos Mucuho is a Mozambican geologist turned developer who enjoys using programming to bring ideas into reality. https://twitter.com/CarlosMucuho
In this article, you will learn how to build an Interactive Voice Response (IVR) system using Twilio's Programmable Voice and Java with Gradle. The example call center demonstrated in this case is for a Party Cookie Dessert hotline. Prerequisites IntelliJ IDEA Community Edition for convenient and fast Java project development work. The community edition is sufficient for this tutorial. Java Development Kit; the Twilio Helper Library works on all versions from Java 8 up to the latest. Gradle version that matches your version of Java. ngrok, a handy utility to connect the development version of the Java application running on your system to a public URL that Twilio can connect to. A Twilio account. If you are new to Twilio, click here to create a free account now. A phone capable of making voice calls to test the project. Set up the project directory Follow the tutorial on how to start a Java Servlets Project as a base for this project. Once you have the codebase described there, proceed. Create a src/main/java/com/twilio/phonetree/servlet subdirectory, and inside that create the following subfolders: common commuter ivr menu Add the Twilio dependencies Open the build.gradle file and add the following line to the dependencies { } block: implementation group: 'com.twilio.sdk', name: 'twilio', version: '9.14.1' Create an interactive voice response app Delete the HelloWorldServlet.java file and create a WelcomeServlet.java file within the src/main/java/com/twilio/phonetree/servlet/ivr subfolder. Add the following code: package com.twilio.phonetree.servlet.ivr; import com.twilio.twiml.TwiMLException; import com.twilio.twiml.VoiceResponse; import com.twilio.twiml.voice.Gather; import com.twilio.twiml.voice.Say; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.IOException; public class WelcomeServlet extends HttpServlet { @Override protected void doPost(HttpServletRequest servletRequest, HttpServletResponse servletResponse) throws IOException { VoiceResponse response = new VoiceResponse.Builder() .gather(new Gather.Builder() .action("/menu/show") .numDigits(1) .say(new Say.Builder("Hello! Please press 1. Press anything else to repeat the message.") .build()) .build()) .build(); servletResponse.setContentType("text/xml"); try { servletResponse.getWriter().write(response.toXml()); } catch (TwiMLException e) { throw new RuntimeException(e); } } } The WelcomeServlet class handles HTTP requests. The doPost() method takes in two parameters, HttpServletRequest servletRequest and HttpServletResponse servletResponse. An instance of VoiceResponse is created using the VoiceResponse.Builder class. This object is used to construct a TwiML response, the XML format Twilio uses for generating voice responses in telephony applications. Inside the VoiceResponse object, the gather method is used to prompt the caller to enter input. It specifies the action URL as "/menu/show" and the number of digits to collect as 1. After configuring the VoiceResponse object, the servlet sets the content type of the response to "text/xml". The try block attempts to write the XML response generated by the VoiceResponse object to the servlet response. If there is a TwiMLException during this process, it is caught, and the servlet throws a RuntimeException. However, as the TwiML we've written is hard-coded, the TwiMLException shouldn't be thrown.
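For reference, the XML this servlet writes back to Twilio looks roughly like the following (indented here for readability):

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Gather action="/menu/show" numDigits="1">
    <Say>Hello! Please press 1. Press anything else to repeat the message.</Say>
  </Gather>
</Response>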
Create the menu for the interactive voice response app Create a file named ShowServlet.java inside the menu subdirectory. Paste the following import statements into the file: package com.twilio.phonetree.servlet.menu; import com.twilio.twiml.TwiMLException; import com.twilio.twiml.VoiceResponse; import com.twilio.twiml.voice.Gather; import com.twilio.twiml.voice.Hangup; import com.twilio.twiml.voice.Say; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.IOException; Write the class definition for the ShowServlet class right below: public class ShowServlet extends HttpServlet { @Override protected void doPost(HttpServletRequest servletRequest, HttpServletResponse servletResponse) throws IOException { String selectedOption = servletRequest.getParameter("Digits"); VoiceResponse response; switch (selectedOption) { case "1": response = getOptions(); break; default: response = com.twilio.phonetree.servlet.common.Redirect.toMainMenu(); } servletResponse.setContentType("text/xml"); try { servletResponse.getWriter().write(response.toXml()); } catch (TwiMLException e) { throw new RuntimeException(e); } } } The selectedOption string retrieves the value of the "Digits" parameter from the incoming POST request. In the context of a Twilio IVR system, the "Digits" parameter typically contains the user's response, which is the digit they pressed on their phone's keypad. The switch statement checks the value of the user's input and determines the appropriate response based on their choice. If the user pressed "1," it calls the getOptions() method to generate a TwiML response which requires the app to gather more input from the user. If the user's choice doesn't match "1", it redirects the user to the main menu using the Redirect.toMainMenu() method. The ShowServlet class handles incoming Twilio IVR requests, processes the user's input, determines the appropriate TwiML response based on their choice, and sends the response back to Twilio for further interaction with the caller. Write a response for the interactive voice response to read aloud Create the getOptions() function underneath the doPost() function: private VoiceResponse getOptions() { VoiceResponse response = new VoiceResponse.Builder() .gather(new Gather.Builder() .action("/commuter/connect") .numDigits(1) .build()) .say(new Say.Builder( "Welcome to Party Cookie Dessert of the Day! " + "Press 2 to check the status of your delivery. " + "Press 3 to hear the collection of cookies available.") .voice(Say.Voice.POLLY_AMY) .language(Say.Language.EN_GB) .loop(3) .build() ).build(); return response; } The Gather verb is used to collect the caller's next input after they press "1" at the welcome menu. This time, the action attribute points to the /commuter/connect route, which will switch our response based on what the caller chooses. This function uses the Twilio VoiceResponse object to build a response that will be read aloud over the phone. The object is built using TwiML attributes in order to make the response loop 3 times and speak in the British English female voice, Amy. To make this IVR more interactive, this app will redirect the user to new phone number lines when they press "2" or "3".
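As a rough sketch, the TwiML produced by getOptions() comes out like this. Note that the Say verb sits after the (empty) Gather, and the voice, language, and loop attributes mirror the builder calls:

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Gather action="/commuter/connect" numDigits="1"/>
  <Say voice="Polly.Amy" language="en-GB" loop="3">Welcome to Party Cookie Dessert of the Day! Press 2 to check the status of your delivery. Press 3 to hear the collection of cookies available.</Say>
</Response>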
Write the default response redirect Under the common subdirectory, create a file named Redirect.java and add the following code: package com.twilio.phonetree.servlet.common; import com.twilio.twiml.VoiceResponse; import com.twilio.twiml.voice.Say; public final class Redirect { private Redirect() { } public static VoiceResponse toMainMenu() { VoiceResponse response = new VoiceResponse.Builder() .say(new Say.Builder("Returning to the main menu") .voice(Say.Voice.POLLY_AMY) .language(Say.Language.EN_GB) .build()) .redirect(new com.twilio.twiml.voice.Redirect.Builder("/ivr/welcome").build()) .build(); return response; } } If the caller does not press the appropriate numbers, the app will redirect them to this response. Connect caller response to another phone line Under the commuter subdirectory, create a file named ConnectServlet.java and add the following code: package com.twilio.phonetree.servlet.commuter; import com.twilio.phonetree.servlet.common.Redirect; import com.twilio.twiml.voice.Dial; import com.twilio.twiml.voice.Number; import com.twilio.twiml.TwiMLException; import com.twilio.twiml.VoiceResponse; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.IOException; import java.util.Map; public class ConnectServlet extends HttpServlet { @Override protected void doPost(HttpServletRequest servletRequest, HttpServletResponse servletResponse) throws IOException { String selectedOption = servletRequest.getParameter("Digits"); Map<String, String> optionPhones = Map.of("2", "+1929XXXXXXX", "3", "+1726XXXXXXX"); VoiceResponse twiMLResponse = optionPhones.containsKey(selectedOption) ? dial(optionPhones.get(selectedOption)) : Redirect.toMainMenu(); servletResponse.setContentType("text/xml"); try { servletResponse.getWriter().write(twiMLResponse.toXml()); } catch (TwiMLException e) { throw new RuntimeException(e); } } private VoiceResponse dial(String phoneNumber) { Number number = new Number.Builder(phoneNumber).build(); return new VoiceResponse.Builder() .dial(new Dial.Builder().number(number).build()) .build(); } } Another doPost() method handles the HTTP POST request and takes in the caller's response. To store the directory of multiple phone numbers in your app, use an immutable Map created with Map.of(). Please remember to replace the placeholder phone numbers with real numbers in E.164 format. If the selectedOption exists in the optionPhones map, the servlet calls the dial method with the associated phone number. If the option doesn't exist, it redirects the call to the main menu. The dial() function builds a VoiceResponse object using the phone number retrieved from the map. The app then connects the caller on the line to the other phone number. Configure the servlet The deployment descriptor is an XML file that tells the servlet container which servlet classes to load and which URL paths they handle; it is how the web application routes incoming requests. Each servlet is given a name and mapped to an associated URL pattern: every request whose path matches the text between the url-pattern tags is dispatched to that servlet. For this project, each servlet gets its own pattern, such as /ivr/welcome for the welcome servlet.
Clear the existing file and paste in the following XML for the project: <?xml version="1.0" encoding="UTF-8"?> <web-app version="3.0" metadata-complete="true" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <servlet> <servlet-name>welcome</servlet-name> <servlet-class>com.twilio.phonetree.servlet.ivr.WelcomeServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>welcome</servlet-name> <url-pattern>/ivr/welcome</url-pattern> </servlet-mapping> <servlet> <servlet-name>show</servlet-name> <servlet-class>com.twilio.phonetree.servlet.menu.ShowServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>show</servlet-name> <url-pattern>/menu/show</url-pattern> </servlet-mapping> <servlet> <servlet-name>connect</servlet-name> <servlet-class>com.twilio.phonetree.servlet.commuter.ConnectServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>connect</servlet-name> <url-pattern>/commuter/connect</url-pattern> </servlet-mapping> </web-app> Compile and run the application View the full code in this GitHub repository. Run the following command in your terminal to clean and compile the application. If you need to upgrade your version of Gradle, you can run the Gradle wrapper beforehand. ./gradlew appRun As the app is running on http://localhost:8080, expose the application to the public internet with a tool such as ngrok, using the command ngrok http 8080. Ngrok is a great tool because it allows you to create a temporary public domain that redirects HTTP requests to our local port 8080. If you do not have ngrok installed, follow the instructions in this article to set up ngrok. Your ngrok terminal will now look like the picture above. As you can see, there are URLs in the “Forwarding” section. These are public URLs that ngrok uses to redirect requests to our local server. Configure Twilio service Go to the Twilio Console and navigate to the Phone Numbers section in order to configure the webhook. Test out the interactive voice response app Grab your cellular device and dial the phone number to test out the Party Cookie Dessert hotline. It's time to fulfill your customers' sweet tooths by selling desserts to them! What's next for interactive voice response applications in Java? Congratulations on building a small call center for a local dessert shop! Now that you have an IVR up and running, check out this article on how you can implement best practices for your call center. If you are looking for a customizable product to use at scale and build faster, you can build with Flex. For those looking to build faster with a team, consider building with Twilio Studio, which requires no coding experience. Diane Phan is a developer on the Twilio Voices team. She loves to help programmers tackle difficult challenges that might prevent them from bringing their projects to life. She can be reached at dphan [at] twilio.com or on LinkedIn.
Traditionally, Rust’s application areas have centered around building command-line interfaces (CLIs), embedded systems, and performance-critical applications. However, with the introduction of the async/await syntax in Rust 1.39, the Rust ecosystem has evolved significantly. It now offers a more accessible approach to creating web and desktop-based applications. What's more, Rust's security-first design, robust concurrency model, and efficient memory management features make it an ideal fit for developing applications in these domains. In this tutorial, we'll explore how to create a fully-functional REST API with Rust and Axum. We’ll go over setting up routes and API endpoints, handling API request queries, body and dynamic URL values, connecting your API to a MySQL database, and middleware integration, as well as tips to ensure that your API stays performant. Prerequisites To follow along with this guide, the following prerequisites are necessary: Familiarity with fundamental programming concepts such as functions, data structures, control flows, modules, and basic asynchronous programming Rust and Cargo set up on your system A MySQL database ready to go. For installation guidance, refer to the setup instructions for Mac/Linux and Windows What is Axum? Axum is a web framework that focuses on performance and simplicity. It utilizes the capabilities of the hyper library to enhance the speed and concurrency of web applications. Axum also brings Rust's async/await functionality to the forefront by integrating with the Tokio library, enabling the development of asynchronous APIs and web applications that are highly performant. Axum's fundamental functionality is based on the Tokio runtime, which provides Rust with the ability to manage non-blocking, event-driven activities seamlessly. This capability is critical for smoothly handling several concurrent processes. Furthermore, Axum is built with Rust's strong type system and ownership rules, which impose compile-time safeguards against common web development pitfalls, like data races and memory leaks. Additionally, Axum's modular design philosophy allows developers to create lightweight, focused apps by adding only the necessary components. Create a new Rust app To kick things off, let’s create a new Rust application by running the following commands. cargo new my_rest_api cd my_rest_api These commands generate a new Rust application for us; they create a new Cargo.toml file where we can manage our application dependencies, as well as a new src/main.rs file containing a Rust function that prints "hello world" to the console. Install Axum, Tokio, and Serde The next step is to install the necessary dependencies for our application. For this tutorial, we'll install Serde and Tokio alongside Axum. Serde will be used for serialization and deserialization of JSON data because Rust lacks built-in functions to work with the JSON format. Tokio will be used to provide an asynchronous runtime because Rust's standard library does not include one. To proceed, open Cargo.toml and update the dependencies section with the configuration below. . . . [dependencies] axum = {version = "0.6.20", features = ["headers"]} serde = { version = "1.0", features = ["derive"] } serde_json = "1.0.68" tokio = { version = "1.0", features = ["full"] } Next, install the dependencies by running the following command: cargo build Running this command downloads the packages (from Crates.io) to your project.
If you’ve previously worked with JavaScript and npm, this is the equivalent of adding packages to your package.json file and running npm install to install them. Hello, Rust! Now that we have all the necessary packages installed, let's dive in and extend the default endpoint. Open the default src/main.rs file and update it with the following code: use axum::{routing::get, Router}; #[tokio::main] async fn main() { let app = Router::new().route("/", get(|| async { "Hello, Rust!" })); println!("Running on http://localhost:3000"); axum::Server::bind(&"0.0.0.0:3000".parse().unwrap()) .serve(app.into_make_service()) .await .unwrap(); } In the first line of the code above, we imported the Axum Router and its get() routing method. Then, we used the #[tokio::main] attribute to bind the main() function to Tokio's runtime, making it asynchronous. After that, we defined a default route that responds to GET requests with "Hello, Rust!", and set the server to listen on all interfaces on port 3000, so it's also available at http://localhost:3000. To start the application, run the following command in your terminal: cargo run After running this command, you should see the output "Running on http://localhost:3000" in your terminal. Visiting http://localhost:3000 in your browser, or using a tool like curl, will display the message "Hello, Rust!". Axum basics Routing and handlers In Axum, the routing mechanism is responsible for directing incoming HTTP requests to their designated handlers. These handlers are essentially functions that contain the logic for processing requests. This is a fancy way of saying that when we define a new endpoint, we also define the function to process incoming requests to that endpoint; in the case of Axum, these functions are called handlers. The router object plays a pivotal role in this process, as it maps URLs to handler functions and specifies the HTTP methods that will be accepted for the endpoint. The example below illustrates this concept further. Open your src/main.rs file and replace its content with the code below. use axum::{ body::Body, http::StatusCode, response::{IntoResponse, Response}, routing::{get, post}, Json, Router, }; use serde::Serialize; #[derive(Serialize)] struct User { id: u64, name: String, email: String, } // Handler for /create-user async fn create_user() -> impl IntoResponse { Response::builder() .status(StatusCode::CREATED) .body(Body::from("User created successfully")) .unwrap() } // Handler for /users async fn list_users() -> Json<Vec<User>> { let users = vec![ User { id: 1, name: "Elijah".to_string(), email: "elijah@example.com".to_string(), }, User { id: 2, name: "John".to_string(), email: "john@doe.com".to_string(), }, ]; Json(users) } #[tokio::main] async fn main() { // Define Routes let app = Router::new() .route("/", get(|| async { "Hello, Rust!" })) .route("/create-user", post(create_user)) .route("/users", get(list_users)); println!("Running on http://localhost:3000"); // Start Server axum::Server::bind(&"127.0.0.1:3000".parse().unwrap()) .serve(app.into_make_service()) .await .unwrap(); } The code above demonstrates Axum routing and handlers in action. We use Router::new() to define our application's routes, specifying the HTTP methods (GET, POST, PUT, DELETE, etc.) and their corresponding handler functions. Take the /users route as an example; it was defined to handle GET requests, and the list_users() function was set as its handler. This, in turn, is an async function that returns a JSON array of two predefined users.
Furthermore, since Rust has no native JSON type, the User struct derives Serde's Serialize trait, which allows User instances to be converted to JSON. The /create-user route, on the other hand, accepts POST requests, and its handler is the create_user() function. This function returns a status code of 201 via status(StatusCode::CREATED), along with a static response body, "User created successfully". To try things out, restart the application and use the following curl command to send a POST request to the /create-user endpoint. curl -X POST http://localhost:3000/create-user You should see the message "User created successfully". Or, visit /users in your browser, where you should see the list of the static users we defined. Extractors Extractors in Axum are a powerful feature that parses and transforms parts of an incoming HTTP request into typed data for your handler functions. They enable you to access request parameters, such as path segments, query strings, and bodies, in a type-safe manner. GET request with path and query extractors For example, to capture dynamic URL values as well as query strings, we can simply declare them, with their expected types, as arguments of our handler function. Update your src/main.rs file with the code below to see this in action. use axum::{ extract::{Path, Query}, routing::get, Router, }; use serde::Deserialize; // A struct for query parameters #[derive(Deserialize)] struct Page { number: u32, } // A handler to demonstrate path and query extractors async fn show_item(Path(id): Path<u32>, Query(page): Query<Page>) -> String { format!("Item {} on page {}", id, page.number) } #[tokio::main] async fn main() { let app = Router::new().route("/item/:id", get(show_item)); axum::Server::bind(&"127.0.0.1:3000".parse().unwrap()) .serve(app.into_make_service()) .await .unwrap(); } In this example, we define a dynamic URL with the /item/:id pattern; this syntax will be familiar from many other web frameworks. The show_item() handler uses the path extractor to capture an item's ID from the URL, along with the query extractor to get the page number from the query string. When a request is made to this endpoint, Axum takes care of invoking the correct handler and providing the extracted data as arguments. Try it out by restarting the application, then running the following curl command: curl "http://localhost:3000/item/42?number=2" You should see "Item 42 on page 2" printed to the terminal. POST request with JSON body extractor For POST requests, where you often need to handle data sent in the request body, Axum provides the JSON extractor to parse JSON data into a Rust type. Update src/main.rs with the code below. use axum::{extract::Json, routing::post, Router}; use serde::Deserialize; // A struct for the JSON body #[derive(Deserialize)] struct Item { title: String, } // A handler to demonstrate the JSON body extractor async fn add_item(Json(item): Json<Item>) -> String { format!("Added item: {}", item.title) } #[tokio::main] async fn main() { let app = Router::new().route("/add-item", post(add_item)); axum::Server::bind(&"127.0.0.1:3000".parse().unwrap()) .serve(app.into_make_service()) .await .unwrap(); } In the example above, we defined a new /add-item endpoint that accepts POST requests. In its handler function, add_item(), we use the JSON extractor to parse the incoming JSON body into the Item struct. This shows how straightforward parsing an incoming request body is with Axum.
You can try this example out by restarting the application and running the following command: curl -X POST http://localhost:3000/add-item \ -H "Content-Type: application/json" \ -d '{"title": "Some random item"}' Once it executes, you should get the response "Added item: Some random item". Error handling Axum provides a way to handle errors uniformly across your application. Handlers can return Result types, which can be used to gracefully handle errors and return appropriate HTTP responses. An example of error handling in a handler function is shown below. To see this in action, update src/main.rs with the following code and restart your app. use axum::{ extract::Path, http::StatusCode, response::IntoResponse, routing::delete, Json, Router, }; use serde::Serialize; #[derive(Serialize)] struct User { id: u64, name: String, } // Define a handler that performs an operation and may return an error async fn delete_user(Path(user_id): Path<u64>) -> Result<Json<User>, impl IntoResponse> { match perform_delete_user(user_id).await { Ok(_) => Ok(Json(User { id: user_id, name: "Deleted User".into(), })), Err(e) => Err(( StatusCode::INTERNAL_SERVER_ERROR, format!("Failed to delete user: {}", e), )), } } // Hypothetical async function to delete a user by ID async fn perform_delete_user(user_id: u64) -> Result<(), String> { // Simulate an error for demonstration if user_id == 1 { Err("User cannot be deleted.".to_string()) } else { // Logic to delete a user... Ok(()) } } #[tokio::main] async fn main() { let app = Router::new().route("/delete-user/:user_id", delete(delete_user)); println!("Running on http://localhost:3000"); axum::Server::bind(&"0.0.0.0:3000".parse().unwrap()) .serve(app.into_make_service()) .await .unwrap(); } In the example above, we defined a /delete-user/:user_id route to hypothetically delete the user with the given user_id. In its handler function, delete_user(), we attempt to delete a user with another hypothetical function, perform_delete_user(). If successful, we return an Ok variant with a dummy user JSON response. If there's an error, we return an Err variant with an HTTP 500 (Internal Server Error) status and an error message. You can test the /delete-user endpoint with the following curl command: curl -X DELETE http://localhost:3000/delete-user/1 This command sends a DELETE request to the /delete-user endpoint with a user ID of 1. Based on the code provided, this should trigger the error condition and return an error response. If you want to test a successful deletion scenario instead, replace 1 with any other number. For example: curl -X DELETE http://localhost:3000/delete-user/2 This should simulate a successful deletion and return a successful response. Advanced techniques in Axum Now that we've covered the fundamentals of Axum, let's explore some additional capabilities that are essential to building a robust API. Database integration Integrating a database is a critical step in API development. Luckily, Axum works seamlessly with any asynchronous Rust database library. For this example, we'll integrate a MySQL database using the sqlx crate, which supports async/await and is compatible with Axum's async nature. To proceed, make sure your MySQL service is running in the background. Next, add SQLx to your Cargo.toml file, with its MySQL and Tokio runtime features enabled, by adding the dependency below.
sqlx = { version = "0.7.2", features = ["runtime-tokio", "mysql"] } Then, run the following command to fetch the new dependencies: cargo build With this setup in place, you can now establish a connection pool to your MySQL database with the MySqlPool::connect() method, as shown below, replacing the placeholders in the definition of database_url. use axum::{routing::get, Router}; use sqlx::MySqlPool; #[tokio::main] async fn main() { let database_url = "mysql://<<USERNAME>>:<<PASSWORD>>@<<HOSTNAME>>/<<DATABASE NAME>>"; let pool = MySqlPool::connect(&database_url) .await .expect("Could not connect to the database"); let app = Router::new().route("/", get(|| async { "Hello, Rust!" })); println!("Running on http://localhost:3000"); axum::Server::bind(&"0.0.0.0:3000".parse().unwrap()) .serve(app.into_make_service()) .await .unwrap(); } With the connection pool ready, you can now start performing database queries in your functions using the following syntax: async fn fetch_data(pool: MySqlPool) -> Result<Json<Vec<MyDataType>>, sqlx::Error> { let data = sqlx::query_as!(MyDataType, "SELECT * FROM my_table") .fetch_all(&pool) .await?; Ok(Json(data)) } However, because we are integrating with Axum, our handler functions need to take Extension<MySqlPool> as an argument; this allows Axum to provide the MySqlPool to the handler function when a request is made to our endpoints. Say, for example, you want your endpoint to return all users in your MySQL database. First, create a new table named users in your MySQL database with the following structure. create table users ( id int primary key auto_increment, name varchar(200) not null, email varchar(200) not null ); Then, run the following SQL statement to add new entries to this table. INSERT INTO users (id, name, email) VALUES (1, 'Alice Smith', 'alice.smith@example.com'), (2, 'Bob Johnson', 'bob.johnson@example.com'), (3, 'Charlie Lee', 'charlie.lee@example.com'), (4, 'Dana White', 'dana.white@example.com'), (5, 'Evan Brown', 'evan.brown@example.com'); Once your table is set up, you would typically set up Axum to work with SQLx like this.
use axum::{extract::Extension, response::IntoResponse, routing::get, Json, Router, Server}; use serde_json::json; use sqlx::{MySqlPool, Row}; // Define the get_users function as before async fn get_users(Extension(pool): Extension<MySqlPool>) -> impl IntoResponse { let rows = match sqlx::query("SELECT id, name, email FROM users") .fetch_all(&pool) .await { Ok(rows) => rows, Err(_) => { return ( axum::http::StatusCode::INTERNAL_SERVER_ERROR, "Internal server error", ) .into_response() } }; let users: Vec<serde_json::Value> = rows .into_iter() .map(|row| { json!({ "id": row.try_get::<i32, _>("id").unwrap_or_default(), "name": row.try_get::<String, _>("name").unwrap_or_default(), "email": row.try_get::<String, _>("email").unwrap_or_default(), }) }) .collect(); (axum::http::StatusCode::OK, Json(users)).into_response() } #[tokio::main] async fn main() { // Set up the database connection pool let database_url = "mysql://<<USERNAME>>:<<PASSWORD>>@<<HOSTNAME>>/<<DATABASE_NAME>>"; let pool = MySqlPool::connect(&database_url) .await .expect("Could not connect to the database"); // Create the Axum router let app = Router::new() .route("/users", get(get_users)) .layer(Extension(pool)); // Run the Axum server Server::bind(&"127.0.0.1:3000".parse().unwrap()) .serve(app.into_make_service()) .await .unwrap(); } In this updated example, we updated our app router with a new Extension layer, passing in our MySQL connection pool; thanks to this change, our handler function is now able to access the connection pool. Also, from the rows returned by our SQL query, we extract the id, name, and email columns to be returned by our API endpoint. After replacing the code in src/main.rs with the code example above, start your application and open http://localhost:3000/users in your browser. You should see a JSON listing of the users, with the exact output depending on the data stored in your own database. Middleware Middleware in Axum allows you to perform operations on the request before it reaches the handler, and on the response before it's sent back to the client. This is useful for tasks like logging, authentication, and setting common response headers. Here's how you can add a simple logging middleware: use axum::{ body::Body, http::Request, middleware::{self, Next}, response::Response, routing::get, Router, Server, }; async fn logging_middleware(req: Request<Body>, next: Next<Body>) -> Response { println!("Received a request to {}", req.uri()); next.run(req).await } #[tokio::main] async fn main() { let app = Router::new() .route("/", get(|| async { "Hello, world!" })) .layer(middleware::from_fn(logging_middleware)); Server::bind(&"127.0.0.1:3000".parse().unwrap()) .serve(app.into_make_service()) .await .unwrap(); } Now, anytime you visit any endpoint, it is logged to the console in the following manner: Received a request to /users Received a request to / Received a request to /test Received a request to /todos Tips for ensuring performance The built-in characteristics of Rust, such as its unique approach to concurrency, zero-cost abstractions, and a strong type system, create the foundation for high-performance API development. These benefits are magnified when utilizing Axum. However, to further improve the performance of your API, focus on mastering Rust's ownership and borrowing principles to effectively manage memory.
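To make that advice concrete, here is a minimal, self-contained sketch, not tied to any endpoint in this article, contrasting a function that borrows its input with one that takes ownership; preferring borrows on hot paths avoids needless allocations and copies:

// Borrows its input: no allocation or copy of the text is made.
fn summary_len(text: &str) -> usize {
    text.len()
}

// Takes ownership: callers that still need the value must clone it first.
fn summary_len_owned(text: String) -> usize {
    text.len()
}

fn main() {
    let payload = String::from("some request body");
    // Borrowing leaves `payload` usable afterwards.
    let borrowed = summary_len(&payload);
    // Without this clone, `payload` would be moved and unusable below.
    let owned = summary_len_owned(payload.clone());
    println!("{} {} {}", borrowed, owned, payload);
}

The same principle applies inside handlers: pass references, or cheaply cloneable handles such as connection pools, rather than cloning large request or response values.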
Reduce lock contention for shared resources and carefully select serialization methods to avoid bottlenecks, preferring more efficient formats and libraries whenever possible, as we've seen in the examples used in this article. In addition, writing efficient code is not enough; it's also crucial to engage in regular profiling to identify and address performance bottlenecks early with tools like Criterion. For example, use Criterion to benchmark a critical function: use criterion::{black_box, criterion_group, criterion_main, Criterion}; fn process_data(data: &[u8]) -> usize { // Simulate data processing data.len() } fn benchmark(c: &mut Criterion) { c.bench_function("process_data", |b| { b.iter(|| process_data(black_box(&[1, 2, 3, 4, 5]))) }); } criterion_group!(benches, benchmark); criterion_main!(benches); Furthermore, keep your Rust compiler and dependencies updated to benefit from the latest optimizations. Following these recommendations could significantly improve the efficiency and scalability of your APIs, ensuring they function effectively under varying loads while maintaining the resilience that Rust and Axum provide. That's how to build high-performance REST APIs with Rust and Axum Throughout this article, we've explored the process of building a high-performance REST API with Rust and Axum. We've delved into the framework's robust features, from routing to error handling, and touched on advanced topics like database integration and middleware. We also looked at several pointers that could help you improve your API performance. For more hands-on experience, you can find the complete code used in this article on GitHub. Thanks for reading! Elijah Asaolu is a technical writer and software engineer. He frequently enjoys writing technical articles to share his skills and experience with other developers.
As far as modern applications are concerned, few things are more important than an efficient communication medium between the client and server. Traditionally, RESTful APIs are the go-to choice for many developers, offering a structured approach to data exchange. However, GraphQL has challenged the status quo in recent years, as it solves the problem of under-fetching and over-fetching, a common occurrence in RESTful communications. Building a GraphQL server has been well-documented for several languages, but not so much for Rust, so in this article, I will show you how to bring all that Rust-y goodness to the world of GraphQL by building one using Juniper. Juniper is a GraphQL server library for Rust that helps you build servers with minimal boilerplate and configuration. Additionally, by using Rust, performance and type safety are guaranteed. What you will build In this article, you will build the GraphQL server for a bird API. This API holds data for endangered species and has four key entities: Bird: This entity holds information on the bird, such as the common name and scientific name. Threat: This entity corresponds to a potential threat to a bird, such as poaching. Attribute: This entity corresponds to a bird’s attribute as identified by an attributor. Attributor: This entity holds information on the attributor. To get familiar with writing queries, your server will be able to handle the following queries: Get all birds Get a single bird To get familiar with writing mutations, your server will be able to handle the following mutations: Add a new attribute Delete an existing attribute Your API will save data to a MySQL database, with Diesel as an ORM. Because Juniper does not provide a web server, you will use Rocket to handle requests and provide the appropriate responses. The Juniper integration with Rocket also embeds GraphiQL for easy debugging. Requirements To follow this tutorial, you will need the following: A basic understanding of Rust Rust ≥ 1.67 and Cargo Access to a MySQL database A MySQL client, such as the MySQL Command-Line Client The MySQL C API, as Cargo needs the MySQL headers to install the Diesel CLI. Additionally, Diesel recommends using the Diesel CLI for managing your database. It will be used in this tutorial to manage migrations. You can install it (with only the MySQL feature) using the following command: cargo install diesel_cli --no-default-features --features mysql If you encounter issues while running this command, check out the Diesel Getting Started guide, or try removing --no-default-features from the above command. Get started Wherever you create your Rust projects, create a new Rust project and change into it using the following commands. cargo new graphql_demo --bin cd graphql_demo Add project dependencies In your editor or IDE of choice, update the dependencies section of the Cargo.toml file to match the following. [dependencies] diesel = { version = "2.1.0", features = ["mysql", "r2d2"] } dotenvy = "0.15.7" juniper = "0.15.11" juniper_rocket = "0.8.2" rocket = { version = "=0.5.0-rc.2" } Here’s what each crate does: Diesel: Diesel is the ORM that will be used to interact with the database. The MySQL feature is specified to provide the requisite API for interacting with a MySQL-based database. The r2d2 feature will be used to set up a connection pool for the database. Dotenvy: Dotenvy helps with loading environment variables. It is a well-maintained version of the dotenv crate.
Juniper: Juniper is a GraphQL server library for Rust. Juniper_rocket: Juniper_rocket is an integration that allows you to build GraphQL servers with Juniper, and serve them with Rocket. Rocket: Rocket will be used for handling incoming requests and returning appropriate responses. Lock project dependencies Between the review and publishing stages of this tutorial, some third-party crates released updates which would break your application. As a short-term solution (pending updates to the Juniper crate), you can download this Cargo.lock file to the project’s top-level folder. This will ensure that all the crate versions are compatible and that your application will run as expected. Set the required environment variable(s) Next, create a new file called .env in the project's top-level folder. Then, in the configuration entry below, replace the placeholder values with your database credentials and paste it into .env. DATABASE_URL=mysql://<<DB_USERNAME>>:<<DB_PASSWORD>>@<<DB_HOST_OR_IP>>:<<DB_PORT>>/bird_db Set up database Next, create your database using the following command. diesel setup After that, create a migration for your database by running the following command. This migration will create the database tables, and seed them: diesel migration generate create_tables You will see a response similar to the one below: Creating migrations/2023-09-13-090548_create_tables/up.sql Creating migrations/2023-09-13-090548_create_tables/down.sql For each migration, the up.sql file contains the SQL for changing the database. The commands to revert the changes in up.sql will be stored in the down.sql file. Replace the contents of the newly created up.sql and down.sql migration files to match the respective files in this Gist. Next, apply the changes in the migrations using the following command: diesel migration run When the migration has run, you can check that the database has been updated successfully using the following SQL command with your MySQL client of choice. SELECT TABLE_NAME, TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'bird_db'; The result should list the tables created by the migration, along with their row counts. In addition to setting up the database, the Diesel CLI created a new file named schema.rs in the src folder. This file contains macros based on your table structure, making it easier for you to interact with the database. Have a read about Diesel’s schema if you'd like to know more. Next, add a module for the database. In the src folder, create a new file named database.rs and add the following code to it. use diesel::r2d2::{ConnectionManager, Pool, PoolError}; use diesel::MysqlConnection; use dotenvy::dotenv; use std::env; pub type MysqlPool = Pool<ConnectionManager<MysqlConnection>>; fn init_pool(database_url: &str) -> Result<MysqlPool, PoolError> { let manager = ConnectionManager::<MysqlConnection>::new(database_url); Pool::builder().build(manager) } fn establish_connection() -> MysqlPool { dotenv().ok(); let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set"); init_pool(&database_url).unwrap_or_else(|_| panic!("Could not create database pool")) } pub struct Database { pub pool: MysqlPool, } impl Database { pub fn new() -> Database { Database { pool: establish_connection(), } } } The first declaration in this module is a type alias named MysqlPool, for the database pool. Next, a function named init_pool() is declared. This function takes a string slice corresponding to the database URL and returns a Result enum.
If the pool was successfully built, the Ok variant of the result will contain the earlier-declared type (MysqlPool). The next function, establish_connection(), retrieves the DATABASE_URL environment variable and passes it to init_pool(). The result is unwrapped and returned to the function caller. In the event that an error is encountered, the application will panic and shut down. Next, a Database struct is declared. This struct has only one field, named pool, of type MysqlPool. Finally, a function named new() is implemented for the Database struct. This function calls the establish_connection() function to create a new database pool. Declare models In addition to the Attribute, Attributor, Bird, and Threat models mentioned in the What you will build section, your application will include the following models: BirdThreat: This model links a bird to an associated threat. BirdResponse: This model corresponds to the GraphQL response when a query is made for a single bird. AttributeInput: This model corresponds to the expected type of the mutation argument to add a new Attribute for a bird. AttributeResponse: This model corresponds to the GraphQL response when a mutation for adding a new attribute is received. In the src folder, create a new file named model.rs and add the following code to it: use diesel::prelude::*; use juniper::{GraphQLInputObject, GraphQLObject}; use crate::schema::*; #[derive(GraphQLObject, Queryable, Insertable, Selectable, Identifiable, Associations)] #[diesel(belongs_to(Bird))] #[diesel(belongs_to(Attributor))] #[diesel(table_name = attribute)] pub struct Attribute { pub id: i32, pub bird_id: i32, pub attributor_id: i32, pub bio: String, pub link: String, } #[derive(GraphQLObject, Queryable, Identifiable)] #[diesel(table_name = attributor)] pub struct Attributor { pub id: i32, pub name: String, pub bio: String, } #[derive(GraphQLObject, Queryable, Identifiable, Selectable)] #[diesel(table_name = bird)] pub struct Bird { pub id: i32, pub common_name: String, pub commonwealth_status: String, pub nsw_status: String, pub profile: String, pub scientific_name: String, pub stats: String, pub stats_for: String, } #[derive(GraphQLObject, Queryable, Identifiable, Selectable)] #[diesel(table_name = threat)] pub struct Threat { pub id: i32, pub name: String, } #[derive(GraphQLObject, Queryable, Selectable, Identifiable, Associations)] #[diesel(belongs_to(Bird))] #[diesel(belongs_to(Threat))] #[diesel(table_name = bird_threat)] #[diesel(primary_key(bird_id, threat_id))] pub struct BirdThreat { pub bird_id: i32, pub threat_id: i32, } #[derive(GraphQLObject)] pub struct BirdResponse { pub bird: Bird, pub threats: Vec<Threat>, pub attributes: Vec<Attribute>, } #[derive(GraphQLInputObject, Insertable)] #[diesel(table_name = attribute)] pub struct AttributeInput { pub bird_id: i32, pub attributor_id: i32, pub bio: String, pub link: String, } #[derive(GraphQLObject)] pub struct AttributeResponse { pub bird: Bird, pub attributor: Attributor, pub bio: String, pub link: String, } Structs (or enums) with the GraphQLObject attribute are exposed to GraphQL, allowing you to query for specific fields. In the same vein, the GraphQLInputObject attribute exposes structs as input objects. The Associations, Identifiable, Insertable, Queryable, and Selectable attributes are provided by Diesel for a simplified means of interacting with the database. Implement GraphQL functionality In the src folder, create a new file named resolver.rs and add the following code to it.
use diesel::prelude::*; use juniper::{EmptySubscription, FieldResult, graphql_object, RootNode}; use crate::{database::Database, model::*, schema::*}; impl juniper::Context for Database {} pub type Schema = RootNode<'static, Query, Mutation, EmptySubscription<Database>>; pub struct Query; #[graphql_object(context = Database)] impl Query { fn birds(#[graphql(context)] database: &mut Database) -> FieldResult<Vec<Bird>> { use crate::schema::bird::dsl::*; use diesel::RunQueryDsl; let connection = &mut database.pool.get()?; let bird_response = bird.load::<Bird>(connection)?; Ok(bird_response) } fn bird( #[graphql(context)] database: &mut Database, #[graphql(description = "id of the bird")] search_id: i32, ) -> FieldResult<BirdResponse> { let connection = &mut database.pool.get()?; let bird_response = bird::table.find(&search_id).first(connection)?; let bird_threats = bird_threat::table .filter(bird_threat::bird_id.eq(&search_id)) .select(bird_threat::threat_id) .load::<i32>(connection)?; let threats_response = threat::table .filter(threat::id.eq_any(&bird_threats)) .load::<Threat>(connection)?; let bird_attributes = attribute::table .filter(attribute::bird_id.eq(&search_id)) .load::<Attribute>(connection)?; Ok(BirdResponse { bird: bird_response, threats: threats_response, attributes: bird_attributes, }) } } pub struct Mutation; #[graphql_object(context = Database)] impl Mutation { fn new_attribute( #[graphql(context)] database: &mut Database, attribute_input: AttributeInput, ) -> FieldResult<AttributeResponse> { let connection = &mut database.pool.get()?; diesel::insert_into(attribute::table) .values(&attribute_input) .execute(connection)?; let bird_response = bird::table .find(&attribute_input.bird_id) .first(connection)?; let attributor_response = attributor::table .find(&attribute_input.attributor_id) .first(connection)?; Ok(AttributeResponse { bird: bird_response, attributor: attributor_response, bio: attribute_input.bio, link: attribute_input.link, }) } fn remove_attribute( #[graphql(context)] database: &mut Database, attribute_id: i32, ) -> FieldResult<String> { let connection = &mut database.pool.get()?; diesel::delete(attribute::table.filter(attribute::id.eq(attribute_id))) .execute(connection)?; Ok("Attribute deleted successfully".to_string()) } } The first step is to make the Database struct usable by Juniper. This is done by making it implement the Context marker trait. Next, a Schema type is declared. This combines the Query and Mutation types (defined afterwards). The application does not support subscriptions, so the EmptySubscription struct provided by Juniper is used instead. Next, the Query struct is declared. Its implementation is annotated with the graphql_object attribute, which gives it access to the application’s shared state (the database, in this case). This makes the database available to the resolver functions declared within the Query struct. In the same manner, the Mutation struct is declared, marked with the graphql_object attribute, with the corresponding mutation functions declared in its implementation block. Putting it all together You’ve built the database, declared your models, and implemented your GraphQL functionality. All that’s left is to add some endpoints to expose your GraphQL server via Rocket. To do this, open the main.rs file in the src folder and update the code in it to match the following.
use database::Database; use resolver::{Query, Schema, Mutation}; use juniper::EmptySubscription; use rocket::{response::content, State}; mod database; mod model; mod schema; mod resolver; #[rocket::get("/")] fn graphiql() -> content::RawHtml<String> { juniper_rocket::graphiql_source("/graphql", None) } #[rocket::get("/graphql?<request>")] fn get_graphql_handler( context: &State<Database>, request: juniper_rocket::GraphQLRequest, schema: &State<Schema>, ) -> juniper_rocket::GraphQLResponse { request.execute_sync(schema, context) } #[rocket::post("/graphql", data = "<request>")] fn post_graphql_handler( context: &State<Database>, request: juniper_rocket::GraphQLRequest, schema: &State<Schema>, ) -> juniper_rocket::GraphQLResponse { request.execute_sync(schema, context) } #[rocket::main] async fn main() { let _ = rocket::build() .manage(Database::new()) .manage(Schema::new( Query, Mutation, EmptySubscription::<Database>::new(), )) .mount( "/", rocket::routes![graphiql, get_graphql_handler, post_graphql_handler], ) .launch() .await .expect("server to launch"); } The graphiql() function handles requests to the root path and serves the GraphiQL playground as a response, while the get_graphql_handler() and post_graphql_handler() functions are used to handle GraphQL requests and return the appropriate response. In the main() function, a new Rocket instance is created using the build() function. Then a Database instance and a Schema instance are passed to the manage() function, which enables Rocket’s state management for both resources. Finally, the instance is launched via the launch() function. Running the application If you haven't already, download this Cargo.lock file to the project’s top-level folder to avoid issues with some third-party crates. Then, run the application using the following command. cargo run By default, the application will be served on port 8000. Open http://localhost:8000 in your browser. Get all birds Paste the following query to get all birds. query GetAllBirds{ birds{ id, commonName, scientificName, commonwealthStatus, profile } } For each bird, you will receive the id, commonName, scientificName, commonwealthStatus, and profile. Get a single bird Use the following query to retrieve the details for a single bird. query GetBird{ bird(searchId: 3){ bird{ nswStatus } threats{ name } attributes{ link, bio, } } } For the returned bird, you will receive the NSW status. In addition, you will see the associated threats (only by name) and the bird attributes (link and bio). Add new attribute Use the following to send a mutation which adds a new attribute for the specified bird. mutation AddNewAttribute($attribute: AttributeInput!) { newAttribute(attributeInput: $attribute) { bird { commonName } attributor { name } link } } For the $attribute variable, add a query variable as follows: { "attribute": { "birdId": 1, "attributorId": 3, "link": "https://localhost:8000", "bio": "https://www.blogger.com/profile/05959326240924026673" } } Delete an attribute Use the following mutation to delete an attribute from the database. mutation deleteAttribute{ removeAttribute(attributeId: 28) }
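The response below is a sketch based on the string the remove_attribute resolver returns; like every GraphQL result, it arrives wrapped in a top-level data object:

{
  "data": {
    "removeAttribute": "Attribute deleted successfully"
  }
}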
There you have it! Well done! I bet building the GraphQL server was easier than you expected. With five small modules, you were able to set up a shiny new GraphQL server. Not only that, you made it performant by setting up a connection pool for your MySQL database. Pretty neat, right? There are still other things to try out, such as adding more queries and mutations to expand the application’s functionality. The entire codebase is available on GitHub should you get stuck at any point. I’m excited to see what else you come up with. Until next time, make peace not war ✌🏾 Joseph Udonsak is a software engineer with a passion for solving challenges – be it building applications, or conquering new frontiers on Candy Crush. When he’s not staring at his screens, he enjoys a cold beer and laughs with his family and friends. Find him at LinkedIn, Medium, and Dev.to.
CRUD is an acronym for the four basic operations that can be performed on a database: Create, Read, Update, and Delete. These operations are commonly used in databases and database management systems for viewing and modifying data. This tutorial will teach you the essentials of implementing CRUD operations in CakePHP. It illustrates how users can create, read, update, and delete records, thus providing a guide to managing data in your CakePHP application. Prerequisites: Before we dive into the tutorial, make sure you have the following: Basic knowledge of PHP and web development concepts PHP 8.2 installed with the PDO MySQL extension Access to a MySQL server Composer installed globally Bootstrap a new CakePHP application To install CakePHP, navigate to the folder where you want to scaffold the project and run this command: composer create-project --prefer-dist cakephp/app:~4.0 cakephp_crud \ && cd cakephp_crud When asked “Set Folder Permissions ? (Default to Y) [Y,n]?”, answer with Y. The new CakePHP project will be available in a directory named cakephp_crud, and you'll have changed to that directory. Configure the database Once the project has been created, the next step is to connect to the database before starting up our development server. To do that, open up the project folder in your preferred code editor or IDE and navigate to config/app_local.php. In the default section under Datasources, change the default configuration by updating the host, username, password, and database properties to match the credentials of your database. For example, you might set the host to 127.0.0.1, the username to root, leave the password blank, and set the database to the one you'll create in the next step. Set up the database for the project To begin, we need a database with a table to store information about users. Create a database. You can name it anything, but I'm naming mine "crud". The next thing is to create a new table in your database called data using the migrations feature in CakePHP. The table needs to contain the following fields: id: This field will serve as the unique identifier for each user. It should have a type of integer and be the table's primary index, with an auto-increment attribute attached to it. name: This field will store the name of the data input. It should have a data type of varchar with a size of 255. email: This field will store the email address and should have a datatype of varchar. phone_no: This field will also have a datatype of varchar. To do this, open up the terminal and run this command: bin/cake bake migration CreateData This will create a migrations file in config/Migrations/ ending with _CreateData.php. Open that file in your preferred text editor or IDE and replace the body of the change() function with the following: $table = $this->table('data'); $table->addColumn('name', 'string', [ 'default' => null, 'limit' => 255, 'null' => false, ]); $table->addColumn('email', 'string', [ 'default' => null, 'limit' => 255, 'null' => false, ]); $table->addColumn('phone_no', 'string', [ 'default' => null, 'limit' => 255, 'null' => false, ]); $table->create(); Next, run this command to run the migration: bin/cake migrations migrate This will create a table called data in the database. Now, start the development server in the terminal by running this command: bin/cake server Create a model and an entity Creating and configuring the model and entity will be the next step. A model contains the information of the table where we will perform CRUD operations. The entity defines the columns for value assignment.
To create a model, navigate to the src/Model/Table directory and create a file called DataTable.php. Then, paste the following code into the file: <?php declare(strict_types=1); namespace App\Model\Table; use Cake\ORM\Query; use Cake\ORM\RulesChecker; use Cake\ORM\Table; use Cake\Validation\Validator; class DataTable extends Table { /** * Initialize method * * @param array $config The configuration for the Table. * @return void */ public function initialize(array $config): void { parent::initialize($config); $this->setTable('data'); $this->setDisplayField('name'); $this->setPrimaryKey('id'); } /** * Default validation rules. * * @param \Cake\Validation\Validator $validator Validator instance. * @return \Cake\Validation\Validator */ public function validationDefault(Validator $validator): Validator { $validator ->scalar('name') ->maxLength('name', 255) ->requirePresence('name', 'create') ->notEmptyString('name'); $validator ->email('email') ->requirePresence('email', 'create') ->notEmptyString('email'); $validator ->scalar('phone_no') ->maxLength('phone_no', 255) ->requirePresence('phone_no', 'create') ->notEmptyString('phone_no'); return $validator; } } The code above defines a model class named DataTable. This class represents the application's database table, data. It initializes the table's configuration, specifying its name, display field, and primary key. It also defines validation rules for the name, email, and phone_no fields, ensuring data integrity and adherence to specified constraints. Moving on to the entity: inside the src/Model/Entity folder, create a file called Data.php and paste the following into the file: <?php declare(strict_types=1); namespace App\Model\Entity; use Cake\ORM\Entity; /** * Data Entity * * @property int $id * @property string $name * @property string $email * @property string $phone_no */ class Data extends Entity { /** * @var array<string, bool> */ protected $_accessible = [ 'name' => true, 'email' => true, 'phone_no' => true, ]; } The code above defines an entity class named Data, which represents an individual record in a database table. The entity has properties for id, name, email, and phone_no, each with specific data types. CakePHP provides a time-saving and efficient way for us to create the model and entity. We can also achieve the above task of creating the model and entity by running this command: bin/cake bake model Data Running this command will create the model file DataTable.php inside the src/Model/Table folder. Also, we should see the entity file Data.php inside the src/Model/Entity folder. Create the controller The controller governs the application flow. Inside this controller file is where the CRUD methods will be. These methods handle the create, read, update, and delete operations. To create a controller, in src/Controller create a file called DataController.php, and paste the following code into the file: <?php declare(strict_types=1); namespace App\Controller; /** * Data Controller * * @property \App\Model\Table\DataTable $Data * @method \App\Model\Entity\Data[]|\Cake\Datasource\ResultSetInterface paginate($object = null, array $settings = []) */ class DataController extends AppController { /** * Index method * * @return \Cake\Http\Response|null|void Renders view */ public function index() { $data = $this->paginate($this->Data); $this->set(compact('data')); } /** * View method * * @param string|null $id Data id.
* @return \Cake\Http\Response|null|void Renders view * @throws \Cake\Datasource\Exception\RecordNotFoundException When record not found. */ public function view($id = null) { $data = $this->Data->get($id, [ 'contain' => [], ]); $this->set(compact('data')); } /** * Add method * * @return \Cake\Http\Response|null|void Redirects on successful add, renders view otherwise. */ public function add() { $data = $this->Data->newEmptyEntity(); if ($this->request->is('post')) { $data = $this->Data->patchEntity($data, $this->request->getData()); if ($this->Data->save($data)) { $this->Flash->success(__('The data has been saved.')); return $this->redirect(['action' => 'index']); } $this->Flash->error(__('The data could not be saved. Please, try again.')); } $this->set(compact('data')); } /** * Edit method * * @param string|null $id Data id. * @return \Cake\Http\Response|null|void Redirects on successful edit, renders view otherwise. * @throws \Cake\Datasource\Exception\RecordNotFoundException When record not found. */ public function edit($id = null) { $data = $this->Data->get($id, [ 'contain' => [], ]); if ($this->request->is(['patch', 'post', 'put'])) { $data = $this->Data->patchEntity($data, $this->request->getData()); if ($this->Data->save($data)) { $this->Flash->success(__('The data has been saved.')); return $this->redirect(['action' => 'index']); } $this->Flash->error(__('The data could not be saved. Please, try again.')); } $this->set(compact('data')); } /** * Delete method * * @param string|null $id Data id. * @return \Cake\Http\Response|null|void Redirects to index. * @throws \Cake\Datasource\Exception\RecordNotFoundException When record not found. */ public function delete($id = null) { $this->request->allowMethod(['post', 'delete']); $data = $this->Data->get($id); if ($this->Data->delete($data)) { $this->Flash->success(__('The data has been deleted.')); } else { $this->Flash->error(__('The data could not be deleted. Please, try again.')); } return $this->redirect(['action' => 'index']); } } Here's a breakdown of the CRUD methods in the controller: Create: The add() method is responsible for creating a new record. It first creates an empty entity using newEmptyEntity(), then patches the entity with data from the request using patchEntity(), and finally saves the entity to the database using save(). Read: The index() method retrieves all records from the database using paginate(), and the view() method retrieves a single record by its ID using get(). Update: The edit() method retrieves a record by its ID using get(), patches the entity with data from the request using patchEntity(), and then saves the updated entity to the database using save(). Delete: The delete() method retrieves a record by its ID using get() and then deletes the record from the database using delete(). CakePHP provides a time-saving and efficient way for us to create controllers. We can also achieve the above task of creating controllers by running this command: bin/cake bake controller DataController This will create the controller, along with its code, for us. Create the templates The CRUD operations will need view files (templates) to be usable from the browser. Navigate to the templates folder and create a folder called Data. Inside this Data folder, create four files: add.php, edit.php, index.php, view.php. The add.php creates a user interface for adding new data records to a database table. The edit.php creates a user interface for editing an existing data record.
The index.php creates a template to display a list of data records in a tabular format. The view.php creates a template to display the details of a single data record. Paste the following into add.php: <?php /** * @var \App\View\AppView $this * @var \App\Model\Entity\Data $data */ ?> <div class="row"> <aside class="column"> <div class="side-nav"> <h4 class="heading"><?= __('Actions') ?></h4> <?= $this->Html->link(__('List Data'), ['action' => 'index'], ['class' => 'side-nav-item']) ?> </div> </aside> <div class="column-responsive column-80"> <div class= "data form content"> <?= $this->Form->create($data) ?> <fieldset> <legend><?= __('Add Data') ?></legend> <?php echo $this->Form->control('name'); echo $this->Form->control('email'); echo $this->Form->control('phone_no'); ?> </fieldset> <?= $this->Form->button(__('Submit')) ?> <?= $this->Form->end() ?> </div> </div> </div> For edit.php, paste this: <?php /** * @var \App\View\AppView $this * @var \App\Model\Entity\Data $data */ ?> <div class="row"> <aside class="column"> <div class="side-nav"> <h4 class="heading"><?= __('Actions') ?></h4> <?= $this->Form->postLink( __('Delete'), ['action' => 'delete', $data->id], ['confirm' => __('Are you sure you want to delete # {0}?', $data->id), 'class' => 'side-nav-item'] ) ?> <?= $this->Html->link(__('List Data'), ['action' => 'index'], ['class' => 'side-nav-item']) ?> </div> </aside> <div class="column-responsive column-80"> <div class= "data form content"> <?= $this->Form->create($data) ?> <fieldset> <legend><?= __('Edit Data') ?></legend> <?php echo $this->Form->control('name'); echo $this->Form->control('email'); echo $this->Form->control('phone_no'); ?> </fieldset> <?= $this->Form->button(__('Submit')) ?> <?= $this->Form->end() ?> </div> </div> </div> For index.php, paste this: <?php /** * @var \App\View\AppView $this * @var iterable<\App\Model\Entity\Data> $data */ ?> <div class=" data index content"> <?= $this->Html->link(__('New Data'), ['action' => 'add'], ['class' => 'button float-right']) ?> <h3><?= __('Data') ?></h3> <div class="table-responsive"> <table> <thead> <tr> <th><?= $this->Paginator->sort('id') ?></th> <th><?= $this->Paginator->sort('name') ?></th> <th><?= $this->Paginator->sort('email') ?></th> <th><?= $this->Paginator->sort('phone_no') ?></th> <th class="actions"><?= __('Actions') ?></th> </tr> </thead> <tbody> <?php foreach ($data as $data) : ?> <tr> <td><?= $this->Number->format($data->id) ?></td> <td><?= h($data->name) ?></td> <td><?= h($data->email) ?></td> <td><?= h($data->phone_no) ?></td> <td class="actions"> <?= $this->Html->link(__('View'), ['action' => 'view', $data->id]) ?> <?= $this->Html->link(__('Edit'), ['action' => 'edit', $data->id]) ?> <?= $this->Form->postLink(__('Delete'), ['action' => 'delete', $data->id], ['confirm' => __('Are you sure you want to delete # {0}?', $data->id)]) ?> </td> </tr> <?php endforeach; ?> </tbody> </table> </div> <div class="paginator"> <ul class="pagination"> <?= $this->Paginator->first('<< ' . __('first')) ?> <?= $this->Paginator->prev('< ' . __('previous')) ?> <?= $this->Paginator->numbers() ?> <?= $this->Paginator->next(__('next') . ' >') ?> <?= $this->Paginator->last(__('last') . 
' >>') ?> </ul> <p><?= $this->Paginator->counter(__('Page {{page}} of {{pages}}, showing {{current}} record(s) out of {{count}} total')) ?></p> </div> </div> For view.php, paste this: <?php /** * @var \App\View\AppView $this * @var \App\Model\Entity\Data $data */ ?> <div class="row"> <aside class="column"> <div class="side-nav"> <h4 class="heading"><?= __('Actions') ?></h4> <?= $this->Html->link(__('Edit Data'), ['action' => 'edit', $data->id], ['class' => 'side-nav-item']) ?> <?= $this->Form->postLink(__('Delete Data'), ['action' => 'delete', $data->id], ['confirm' => __('Are you sure you want to delete # {0}?', $data->id), 'class' => 'side-nav-item']) ?> <?= $this->Html->link(__('List Data'), ['action' => 'index'], ['class' => 'side-nav-item']) ?> <?= $this->Html->link(__('New Data'), ['action' => 'add'], ['class' => 'side-nav-item']) ?> </div> </aside> <div class="column-responsive column-80"> <div class=" data view content"> <h3><?= h($data->name) ?></h3> <table> <tr> <th><?= __('Name') ?></th> <td><?= h($data->name) ?></td> </tr> <tr> <th><?= __('Email') ?></th> <td><?= h($data->email) ?></td> </tr> <tr> <th><?= __('Phone No') ?></th> <td><?= h($data->phone_no) ?></td> </tr> <tr> <th><?= __('Id') ?></th> <td><?= $this->Number->format($data->id) ?></td> </tr> </table> </div> </div> </div> CakePHP provides a time-saving and efficient way for us to create the templates. We can also achieve the above task of creating add.php, edit.php, index.php, and view.php by running this command: bin/cake bake template Data Running this command will create the files, along with their code. Test the application Refresh the browser so you can see the changes. After that, you can check out the CRUD project by navigating to http://localhost:8765/data. Create a new record by clicking on the New Data button and filling out the details. You can also view, edit, and delete records. That's how to implement CRUD operations in CakePHP In this article, we looked at how to perform CRUD operations in CakePHP. Understanding how to create, read, update, and delete data is fundamental to building robust, secure, and user-friendly web applications. CakePHP's built-in ORM (Object-Relational Mapping) system simplifies database interactions. Please share if you found this helpful! Temitope Taiwo Oyedele is a software engineer and technical writer. He likes to write about things he’s learned and experienced.
Data is king, so managing it efficiently is essential! One common requirement of working with data is exporting data from a database to a CSV (Comma-Separated Values) file, a universal format for sharing structured information. In this tutorial, you will learn how to export data from a MySQL database to a CSV file with the CakePHP framework. Prerequisites Before we dive into the tutorial, make sure you have the following: Basic knowledge of PHP and web development concepts PHP 8.2 installed with the PDO MySQL extension Access to a MySQL server Composer installed globally Create a CakePHP Project To do this, navigate to the folder where you want to install the project and run this command: composer create-project --prefer-dist cakephp/app:~4.0 cakephp_csv When asked, "Set Folder Permissions ? (Default to Y) [Y,n]?", answer with Y. This will install the latest version of CakePHP in a new directory named cakephp_csv. Create the database To begin, we need a database with a table to store the information which will be exported to a CSV file. To keep things simple, the database will store details about a list of workers, including their name, email address, and mobile phone number. Create a database. I'll be naming mine fiie_test. The next thing is to create a new table in your database called workers using the migrations feature in CakePHP. The table needs to contain the following fields: id: This field will serve as the unique identifier for each record. It should have a type of integer and be the table's primary index, with the auto-increment attribute attached to it. name: This field will store the worker's name. It should have a data type of varchar. email: This field will store the worker's email address and have a datatype of varchar. mobile_no: This field will store the worker's mobile phone number, and should also have a datatype of varchar. To do this, open up the terminal and run this command: bin/cake bake migration CreateWorkers This will create a migrations file in config/Migrations/. Navigate to the newly created migration file and replace the change() function with the following up() function: public function up(): void { $table = $this->table('workers'); $table->addColumn('name', 'string', [ 'limit' => 255, 'null' => false, ]); $table->addColumn('email', 'string', [ 'limit' => 255, 'null' => false, ]); $table->addColumn('mobile_no', 'string', [ 'limit' => 255, 'null' => false, ]); $table->create(); $data = [ [ 'name' => 'temi tope', 'email' => 'test@gmail.com', 'mobile_no' => '1234567895', ], [ 'name' => 'john doe J', 'email' => 'john@gmail.com', 'mobile_no' => '7412589635', ], [ 'name' => 'babtunde tolulope', 'email' => 'tolu@gmail.com', 'mobile_no' => '9632587410', ], [ 'name' => 'anonymous', 'email' => 'anon@gmail.com', 'mobile_no' => '8529637410', ], [ 'name' => 'oyedele', 'email' => 'oyedele@gmail.com', 'mobile_no' => '9658741230', ], [ 'name' => 'koded', 'email' => 'koded@gmail.com', 'mobile_no' => '2635897410', ], [ 'name' => 'lorem ipsum', 'email' => 'lorem@gmail.com', 'mobile_no' => '8526937410', ], [ 'name' => 'asaolu', 'email' => 'asaolu@gmail.com', 'mobile_no' => '8974563210', ], ]; $table->insert($data)->save(); } Next, run this command to create the table schema: bin/cake migrations migrate This will not only create a workers table but also insert the seed data into it. If you look at the contents of the workers table, it should contain the eight rows defined in the migration.
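For example, running the following query in your MySQL client (an illustrative check; any client will do) should return all eight seeded rows:

SELECT id, name, email, mobile_no FROM workers;

The first row should show 1, temi tope, test@gmail.com, and 1234567895, matching the first entry in the migration's $data array.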
Connect to the database To connect the database to the application, open the project folder in your preferred code editor or IDE and open config/app_local.php. In the default section under Datasources, update the default configuration by changing the host, username, password, and database properties to match the credentials of your database. For example, you might change the host to 127.0.0.1, the username to root, leave the password blank, and set the database to the one created earlier. Now, start the development server in your terminal, by running this command: bin/cake server If you open http://localhost:8765, you should see CakePHP's default welcome page. Create a model and an entity To create a model and entity, open up a new terminal and run this command: bin/cake bake model Workers --no-validation --no-rules Running this command will create the model file WorkersTable.php inside the src/Model/Table folder. Also, we should see the entity file Worker.php inside the src/Model/Entity folder. Create a controller To create a controller, open up the terminal once again and run this command: bin/cake bake controller Details --no-actions Running this command will create a file called DetailsController.php file inside the src/Controller folder. Open this file and paste the following into it. <?php declare(strict_types=1); namespace App\Controller; class DetailsController extends AppController { public function initialize(): void { parent::initialize(); $this->loadModel("Workers"); } public function downloadCSVReport() { $this->autoRender = false; $workers = $this->Workers->find()->toList(); header('Content-Type: text/csv; charset=utf-8'); header('Content-Disposition: attachment; filename=workers-' . date("Y-m-d-h-i-s") . '.csv'); $output = fopen('php://output', 'w'); fputcsv($output, array('Id', 'Name', 'Email', 'Mobile')); if (count($workers) > 0) { foreach ($workers as $worker) { $worker_row = [ $worker['id'], ucfirst($worker['name']), $worker['email'], $worker['mobile_no'] ]; fputcsv($output, $worker_row); } } } } The code above generates a CSV report of worker data from the database and prompts the browser to download the file. To do this, it initializes the controller, loads a model, and sets HTTP headers for CSV content. It then retrieves worker data from the workers table and writes it to the output stream of the PHP process (php://output), which becomes the body of the response. Add a Route Adding a route will be the final step. Navigate to config/routes.php and inside the call to $routes->scope(), paste the following: $builder->connect( '/download-csv', ['controller' => 'Details', 'action' => 'downloadCSVReport'] ); Test the application Now, let's check that the application works as expected by opening http://localhost:8765/download-csv in your preferred browser. You'll notice that a CSV file with the name workers-{datetime}.csv will be downloaded. Its contents should match the rows seeded into the workers table. Conclusion In this article, we learned how to export data from a database to CSV in CakePHP. By following the steps outlined in this tutorial, you're now equipped with the essential knowledge to perform this task using CakePHP. Happy coding! Temitope Taiwo Oyedele is a software engineer and technical writer. He likes to write about things he’s learned and experienced.
Docker is a versatile containerization tool that simplifies the management of the essential components that power your web application. What's more, it saves you the stress of grappling with various independent tools and configurations. In this article, we will explore how Docker can be used in Laravel development to: Streamline the process of serving your Laravel applications locally Migrate Laravel applications across different computers or servers Eliminate software compatibility concerns Deploy an application to a remote server Before we delve deeper though, it's important to note that within the Laravel development ecosystem, Laravel Sail serves as the standard for Docker integration. Sail simplifies the process of working with Docker in Laravel, offering a user-friendly approach for developers, especially those without prior Docker experience. However, in this tutorial, you will get an in-depth exploration of Docker in the context of Laravel. Rather than relying on Laravel Sail's pre-configured environment, you will learn how to run Laravel inside a Docker container and deploy it with Docker Compose. This deeper dive will enhance your understanding of how Laravel Sail works under the hood, empowering you not only to leverage its advantages, but also to troubleshoot any potential issues that may arise during usage or when making custom configurations with Sail. By the end of this tutorial, you should be able to use Docker to assemble your Laravel application like a well-organized LEGO set, allowing you to construct and operate it seamlessly. Prerequisites A Digital Ocean account Docker Engine Docker Compose PHP 8.2 Composer installed globally Prior experience with Laravel development would be ideal, but not mandatory What is Docker? Imagine you're making a cake. Instead of baking it in your own kitchen, you use a special portable kitchen. This portable kitchen has everything you need: ingredients, an oven, mixing bowls, and so on. This is like a Docker image. It's a self-contained environment that holds everything your Laravel app needs to run: the code, the supporting files, the command line tools, and so on. It keeps everything organized and separate from your computer's setup, just like the portable kitchen keeps your cake-making separate from your home kitchen. This makes it super easy to move your app between different computers or servers without worrying if they have the right software installed. Key Docker terms Now, let's cover a few of the key Docker terms that you need to be familiar with. Image: Think of an image as a blueprint for what your application needs, much like a recipe guiding Docker in creating a specific environment for your Laravel app. An image includes your code, the web server, and any required tools. These images are akin to ready-to-bake cake mixes, waiting to be transformed into containers at runtime. Container: A container is analogous to the actual cake baked from the recipe (image). Containers represent isolated virtual environments where your Laravel application operates, shielded from external influences. Multiple containers are often used simultaneously, each hosting different components of the application, such as the application's database, web server, and caching server. Dockerfile: Think of a Dockerfile as a set of step-by-step instructions for Docker to build an image. It's akin to documenting the process of mixing ingredients and baking a cake.
Within a Dockerfile, you define your app's requirements, such as the PHP version, necessary packages, configuration settings, and environment variables.

Docker Compose: Docker Compose is a tool that simplifies the management of multiple Docker containers by allowing you to define, configure, and run them as a single application, streamlining complex deployments and ensuring seamless communication between containers.

Why should you use Docker?

There are three great reasons for using Docker:

Development: Docker gives you a consistent environment across all developers' machines. No more "It works on my machine" issues! Everyone uses the same setup, so the code behaves the same for everyone. Plus, you can quickly start and stop containers as you work on different parts of your app.

Testing: With Docker, you can create an image that mirrors your production server. This means you can test your app in an environment that's identical to where it will actually run. Bugs and issues are easier to catch before they reach users.

Deployment: Docker containers can be easily moved from one host to another. So, the setup that worked on your local development machine will work on the deployment server too. You package everything neatly into an image, which you can then deploy to your production server. This consistency reduces deployment problems.

Dockerize a Laravel application

With all this said, let's create and configure a Laravel application powered by Docker. First, create a new Laravel project using Composer, and change into the new project directory with the commands below.

composer create-project laravel/laravel laravel-docker-project
cd laravel-docker-project

Then, open .env in your preferred text editor or IDE and make the following three changes:

Set DB_HOST to database. This needs to match the hostname of the container containing the database.
Set DB_USERNAME to laravel. It's best to use an account other than root to connect to a database.
Set a value for DB_PASSWORD

Laravel applications require multiple services to function. Let's create the default Docker Compose configuration file to define the services required by our Laravel application. The file will contain settings applicable whether the application is running locally or in production. The environment-specific settings will be added in additional configuration files.

In the root of your Laravel application, create a new file named docker-compose.yml, like so:

touch docker-compose.yml

If you're using Microsoft Windows, or would just prefer to, create the file with your preferred text editor or IDE. In this file, we will define all the services needed for our Laravel application to function. We can start and stop these services using Docker Compose. Let's get started by defining our database service.

version: '3.8'

services:

  database:
    image: mysql:8.0
    ports:
      - 3306:3306
    environment:
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_ALLOW_EMPTY_PASSWORD='YES'
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: mysql -h localhost -u root -e 'SELECT version();'
      start_period: 5s
      interval: 15s
      timeout: 5s
      retries: 5
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data: ~

Let's talk a little about some of the configuration above:

image: This points to the base image that this image will be built from. Specifically, it will use the official Docker Hub MySQL image.

ports: Here, we are defining the port mappings between our local development machine and the Docker container (or between the host and the guest).
The port on the left is the port of the local machine, while the port on the right is the port on the container.

environment: This is where we specify the database credentials needed to connect our Laravel application to the MySQL server container. These are assigned from environment variables of the same name in the environment where the container is started, from the .env file of our Laravel application.

healthcheck: This ensures the database container is fully started, not just running. If not, the PHP container will not be able to run the database migrations when it starts up.

volumes: Docker Volumes make it possible to persist data. Here, we assigned a volume where the changes would be stored, on the local filesystem.

Next, we need to create a Redis service. Add the following after the database service in docker-compose.yml:

  redis:
    image: redis:alpine
    command: redis-server --appendonly yes --requirepass "${REDIS_PASSWORD}"
    ports:
      - 6379:6379

Similar to the database service, the Redis service above defines the image that the service is based on and its port mappings. In addition, it defines the command which will be run when Docker builds the Redis image (command). The command requires the Redis password (${REDIS_PASSWORD}), which is obtained from the .env file of our Laravel application.

Next, we need to configure a PHP service for our application to run. Add the following lines to the end of the services section in docker-compose.yml.

  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
      target: php
      args:
        - APP_ENV=${APP_ENV}
    command: /opt/post-start.sh
    environment:
      - APP_ENV=${APP_ENV}
      - CONTAINER_ROLE=app
    volumes:
      - ./:/var/www/html
    depends_on:
      database:
        condition: service_healthy
      redis:
        condition: service_started

The PHP service is a little different from the others. Let's go over some of its key concepts:

build: Rather than pulling a ready-made image, this service is built from a custom Dockerfile (docker/php/Dockerfile), targeting its php stage.
command: This overrides the image's default command with a small startup script, /opt/post-start.sh, which you will create shortly.
volumes: This bind-mounts the project directory into the container at /var/www/html, so code changes on the host are immediately visible inside the container.
depends_on: This ensures the database container is healthy, and the Redis container started, before the PHP container starts.

Now, create a file named Dockerfile in a new directory docker/php. Note, this file does not have a file extension. Add the configuration below to the file.

FROM php:8.1.24-apache-bookworm as php

RUN docker-php-ext-install pdo pdo_mysql bcmath

RUN pecl install -o -f redis \
    && rm -rf /tmp/pear \
    && docker-php-ext-enable redis

ENV APACHE_DOCUMENT_ROOT /var/www/html/public

RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf \
    && sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf \
    && a2enmod rewrite

COPY ./docker/php/post-start.sh /opt/
RUN chmod -v +x /opt/post-start.sh

This file builds a custom image for the PHP container. The image will be based on the 8.1.24-apache-bookworm tag of the official Docker Hub PHP image. It only makes a few additions, those being adding the PDO MySQL, BC Math, and Redis extensions for PHP, pointing Apache's document root at Laravel's public directory, enabling Apache's rewrite module, and copying in the startup script.

Then, in the docker/php directory, create a new file named post-start.sh, and add the following code to it.

#!/bin/bash
set -m

apache2-foreground &

php artisan migrate --env=development

chown -R www-data:www-data /var/www/html/storage
chmod -R 755 /var/www/html/storage

fg %1

This is a small shell script that overrides the PHP image's default CMD instruction. The reason for doing this is to ensure that the database migrations are run during startup, in addition to starting Apache. That way, the application's ready to use, without requiring any manual intervention. The script starts Apache and puts it in the background. Then, it uses Laravel's Artisan Console to run the database migrations.
After that, it updates the ownership and permissions of the storage directory, so that Laravel's log file can be written to by the web server user (www-data). Finally, it brings Apache back into the foreground, listening for requests. Database migrations can be destructive, so it's not always smart to run them automatically in production. However, it's acceptable for the purposes of a simple example.

Next, create a file named Dockerfile in a new docker/node directory, and add the code below to it.

FROM node:14-alpine as node

WORKDIR <<Path/To/Your/Project>>
COPY . .
RUN npm install

This file builds a custom image for the node container. The image will be based on the 14-alpine tag of the official Docker Hub Node.js image. We also need Node.js to handle JavaScript-related tasks in our Laravel application. Let's define a service for it. Add the following to the end of the definition in docker-compose.yml, after the php service:

  node:
    build:
      context: .
      dockerfile: docker/node/Dockerfile
      target: node
    volumes:
      - ./node_modules:/var/www/html/node_modules
    tty: true

Create a development Docker Compose configuration file

Now, create a new file in the project's top-level directory named docker-compose.dev.yml. This file has additional directives that are only applicable when deploying the application locally, in development. In it, add the configuration below.

version: '3.8'

services:

  database:
    ports:
      - "3306:3306"

  redis:
    ports:
      - "6379:6379"

  php:
    ports:
      - "8000:80"

The changes in this configuration file map the container ports of the database (3306), redis (6379), and php (80) services to ports 3306, 6379, and 8000 on the host (the local development machine). In the case of the database and redis ports, this is so that, if required, we can use clients to interact with them, such as MySQL's command line client.

Start the Docker containers

First up, run the command below.

ln -s docker-compose.dev.yml docker-compose.override.yml

This symlinks the development configuration file to Docker Compose's second configuration file, docker-compose.override.yml. If Docker Compose finds this file, it will merge the configuration directives in the file with those in the default configuration file.

Now, run the command below to serve up the application.

docker-compose up --build

The command spins up all of the services we have configured and prints out information about them in your terminal. Your terminal should look like the image below, after the command is run. If you need to stop the containers at any point, use the following command.

docker-compose down

In a separate terminal tab or window, run the docker-compose ps command to see all the running services. Your terminal should look similar to the image below.

Implement authentication in our application

At this point, our services are running. To ensure that everything works, we need to interact with our database. Let's implement authentication using Laravel Breeze as a way of testing that our Docker configuration works completely. Install Laravel Breeze using the command below.

composer require laravel/breeze --dev

Next, we need to run the following command to set up Laravel Breeze.

php artisan breeze:install

Test our application locally

Now you can test that the application works. Start it up, again, by running docker compose up --detach. Then, open http://0.0.0.0:8000/ in your browser. In your browser, you should see the application running, similar to the screenshot below.
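If the page doesn't come up, the container logs are usually the quickest way to see why, and you can also print the merged configuration that Docker Compose actually used. These are optional checks, using the service names defined above:

docker-compose config           # print the merged configuration (base file + override)
docker-compose logs -f php      # follow the PHP/Apache container's output
docker-compose logs database    # check whether MySQL finished initializing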
Let's create a new user to see that our authentication works. Head to the /register route by clicking on the Register link and create a new user, like so. After creating the user, you should be redirected to the application dashboard like so. Now, we are sure our Laravel application works when powered by Docker Compose.

If you'd like to dive deeper into Docker Compose and learn loads more, such as how to debug Docker Compose configurations, download Deploy with Docker Compose. It's free.

Deploy the Laravel application to production with Docker Compose

Prepare the application for deployment

Good job, if you've gotten to this point. Now, let's dive a little deeper by deploying our application to a cloud hosting service. For the purpose of this tutorial, we will deploy to DigitalOcean. However, the steps are basically the same for most service providers, as long as you have SSH access. Let's get started by creating a DigitalOcean Droplet. However, before we proceed, update your APP_ENV value in .env from development to production like so.

APP_ENV=production

Create a production Docker Compose configuration file

Now, create a new file in the project's top-level directory named docker-compose.prod.yml. This file has additional directives that are only applicable when deploying the application to production. In it, add the configuration below.

version: '3.8'

services:

  php:
    ports:
      - "80:80"

There's very little going on in this configuration. It just maps port 80 in the service to port 80 on the host. That way, the application can be accessed on the standard HTTP port, when deployed.

Set up a new DigitalOcean droplet

Creating a DigitalOcean Docker Droplet is very straightforward, requiring just a few clicks. First, open the Docker app in the DigitalOcean Marketplace. Then, click Create Docker Droplet. From there:

Choose the region nearest to you, which should also set the datacenter
Leave the image, size, and cpu options set to their defaults
Leave the authentication method set to SSH Key and select the applicable SSH key to use to access the droplet
Finally, click Create Droplet

After a few minutes, the new droplet should be ready to use. Next, we need to SSH into our droplet using its IP address. To SSH into the server as the root user, replace <<Your Droplet's IP-Address>> in the command below with the IP address of your droplet, and run it.

ssh root@<<Your Droplet's IP-Address>>

Next, let's create a non-root user who can deploy the application, using the commands below.

adduser deployment
usermod -aG docker deployment
mkdir /home/deployment/.ssh

The adduser command prompts you to create a password for the user. After creating the password, skip through all the other prompts until the user is created.

Lastly, so that the deployment user can log in to the droplet, in a new terminal session, create an SSH public key for the deployment user, then copy it to the deployment user's .ssh directory, by running the command below (after replacing the placeholder).

scp <path to the public key> root@<<Your Droplet's IP-Address>>:/home/deployment/.ssh/

Then, in your original terminal session, set the deployment user as the owner of the public key that you just uploaded for them, by running the command below.

chown -Rv deployment:deployment /home/deployment/.ssh

Copy the project files to the droplet

With that done, log in as the deployment user and create a new directory, named laravel_and_docker, in their home directory, by running the command below. This is where our application will be deployed.
mkdir /home/deployment/laravel_and_docker

Now, we need to copy the files from our local machine to the droplet. There is more than one way of doing this, which you'll see in future tutorials. For the purpose of this tutorial, we will make use of the rsync command, which has a very simple syntax: rsync (options) (local project directory) (destination on the server). In a third terminal session, run the command below to copy the project files from our local machine to our server.

rsync -avzh \
    --no-links \
    --exclude=storage/ \
    . \
    deployment@<<Your Droplet's IP-Address>>:/home/deployment/laravel_and_docker

Deploy the application to production

Now, you're ready to serve up your Laravel project. In the terminal session where you are logged in to the droplet as the deployment user, change into the project directory, symlink the production configuration file to the Docker Compose override file, and start the application by running the commands below.

cd laravel_and_docker
rm docker-compose.override.yml
ln -s docker-compose.prod.yml docker-compose.override.yml
docker-compose up -d

These commands build and serve our Docker containers, after which we can launch our project using the droplet's IP address. At this point you should be able to view your project in the browser, as you can see in the screenshot below. Now, your application should be accessible and function exactly as it does locally. Let's verify this by creating a new user account and logging in with it.

That's been a deep dive into Laravel Development with Docker

In this tutorial, we've explored the powerful combination of Laravel development with Docker, enabling you to build, test, and deploy web applications with ease. By creating Docker images for PHP, MySQL, Redis, and Node.js, you've gained the ability to maintain a consistent and reproducible development environment. You can find the code on GitHub, if you got stuck at any point during the tutorial.

Moses Anumadu is a software developer and online educator who loves to write clean, maintainable code. He creates technical content for technical audiences. You can find him at Laraveldev.pro.

"oakland1" (in the tutorial's main image) by -tarat- is licensed under CC BY-NC-ND 2.0.
Looking for important logs in the pools of log files and data can be a pain at times during development, testing, or debugging. A tool that gives a real-time feed of critical, error, and warning events in our APIs makes triaging and bug fixing much less of an issue for developers. Imagine a scenario where you get an alert on WhatsApp (personal or group) about incidents in your API as they happen: developers can readily remedy costly bugs in no time and maintain a good customer experience. Through this tutorial, you will learn how to integrate Twilio's WhatsApp API and Winston into a Node.js API, making incident/error reporting and troubleshooting as easy as possible.

Prerequisites

Here is a list of what you need to follow along in this tutorial:

Node.js installation
Git installation
A Node.js API that is already built. (You can use this example)
A free Twilio account (sign up with Twilio for free).
Install ngrok and make sure it's authenticated.
Knowledge of API documentation using Swagger

Setting Up Your Application

To set up your APIs, I attached a link to a codebase that contains the base application used for this tutorial. It contains all the code necessary to start a Node.js server and some already-made endpoints that work once you connect to a MySQL database. This section will walk you through how to run the project on your local machine, set up Twilio, and meet any other requirements you need to build your solution.

Running the Node.js APIs

To get the project running on your local machine, you can follow the steps below. Navigate to your terminal and clone the project from the GitHub repository by running the following command:

git clone https://github.com/DesmondSanctity/twilio-log-alert.git

Make sure you are in the APIs-only branch and then run the installation script within the project directory to install the needed packages:

npm install

After the packages have been installed, open up the project directory in your preferred IDE, then create a .env file and add the code below with their values:

PORT=5000
DB_NAME=XXXXX
DB_USERNAME=XXXXX
DB_PASSWORD=XXXXX
DB_HOST=XXXXX
JWT_SECRET=XXXXX
JWT_EXPIRES_IN=XXXXX
TWILIO_AUTH_TOKEN=XXXXX
TWILIO_ACCOUNT_SID=XXXXX
REDIS_URL=XXXXX

To set up MySQL on your local machine, download the XAMPP installer for your operating system from their official website and install it on your local machine. After installation, you will get the screen below when you run the application. Start the Apache and MySQL servers by clicking the Start button in the Actions section for Apache and MySQL. When the server is started, you can navigate to phpMyAdmin with the following link: http://localhost/phpmyadmin/. Or you can click on the Admin button, where a web page will open to the XAMPP dashboard from which you can access phpMyAdmin. The phpMyAdmin page is shown below.

To create a new database, click the New button on the left sidebar, add the name you want your database to have, and create it. You can choose to set up a user account with a password to access your database, or use the default root user with all admin privileges to connect to your database. Click on the User accounts tab at the top to access the user accounts page. You will see all the users available, along with their username, password (if any), host address, and privileges. For this tutorial, you will use the root user with root as its username, localhost as its host, no password and all privileges.
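Optionally, before wiring these credentials into the app, you can sanity-check them from Node itself. The snippet below is a hypothetical one-off script, not part of the project; it assumes the database is named alertdb (as used below) and uses the mysql2 package, which you would need to install separately (npm install mysql2).

// check-db.js — hypothetical one-off connectivity check
const mysql = require("mysql2/promise");

(async () => {
  const connection = await mysql.createConnection({
    host: "localhost",
    user: "root",
    password: "",        // the root user was left without a password above
    database: "alertdb",
  });
  // A trivial query proves both the credentials and the server are good
  const [rows] = await connection.query("SELECT VERSION() AS version");
  console.log("Connected to MySQL", rows[0].version);
  await connection.end();
})();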
Then, proceed to use the details in your .env file.

DB_NAME=alertdb
DB_USERNAME=root
DB_PASSWORD=
DB_HOST=localhost

Now our database is ready for connection. Once we establish a connection to it when we run our application, the necessary tables will auto-create and we can start reading or writing to the database.

Running the command below generates a random 64-character hexadecimal string (32 random bytes) that can be used for signing JWT tokens. After you run it, copy the output as the value for JWT_SECRET in your .env file:

node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"

The JWT_EXPIRES_IN variable in your .env file sets how long our JWT tokens remain valid before users have to log in again to use the APIs. In this tutorial you will use 12h as the value, which signifies 12 hours.

You can fetch the REDIS_URL from any Redis instance set up by any provider. In this project, you will use Render's Redis provisioning. After creating an account on Render, you can click on the New button in the dashboard to set up a Redis server as shown below. Enter a name for your Redis instance, choose the free tier, and then click the Create Redis button to create the server as shown below. After creating the server, go to the dashboard and click on the Redis server you created to get the credential you will use to connect to it. Scroll down to the Access Control section and click Add source. Enter 0.0.0.0/0 for the Source and click Save. This allows access to your Redis instance from any server, whether you're hosting in your own local environment or on a cloud server. Now scroll up to the Connections section and copy the External Redis URL. Paste this value in your .env file for the REDIS_URL variable. Alternatively, you can use a local instance of Redis on your machine if you have one set up already.

For Twilio credentials, you can get them from your account dashboard if you already have an account with Twilio, or follow the steps in the next section to set one up; you will see the keys as shown in the next section. Remember to add the .env file, and any other file you may have that contains secret keys, to the .gitignore file to avoid exposing them to the public.

Setting up Twilio Account

To set up your Twilio account, sign up for an account and log into your Twilio Console using your account details. From the toolbar's Account menu, select API Keys and Tokens. Take note of your test credentials as shown in the photo below, which include the Account SID and Auth Token. Head over to your .env file and add these values to the TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN variables respectively.

To complete the setup process, access your Twilio Console dashboard and navigate to the WhatsApp sandbox in your account. This sandbox is designed to enable you to test your application in a development environment without requiring approval from WhatsApp. To access the sandbox, select Messaging from the left-hand menu, followed by Try It Out and then Send A WhatsApp Message. From the sandbox tab, take note of the Twilio phone number and the join code.

Adding Logging and Alert Functionality with Winston and Twilio

In this section, you will delve into adding logging and alert functionality to your application. You will learn about Winston, a library that aims to decouple parts of the logging process in an API to make it more flexible and extensible. You will learn how to use it in a Node.js API, how to format logs, redact sensitive information, and set it as a middleware to cover all your endpoints.
You will also learn how to add the Twilio WhatsApp API function to send real-time messages for defined incidents in the API.

Creating Logger Helpers Function

In your codebase, you will install three new packages. Install them by entering the command below on your terminal:

npm install winston winston-daily-rotate-file twilio

This command will install the following packages:

Winston: The tool you will use to get logs from every request and response in your APIs.
Winston Daily Rotate File: This package will help you organize your logs and save them in a file by the day they occur. This way, it is easier to find your logs by the day they occurred.
Twilio: This is the Node.js package to connect to Twilio services and will be used to set up real-time WhatsApp messaging.

After a successful installation, you are set to create some helper functions for your logging functionality. These files should be created in the src/utils/log directory. The first one is the sensitiveKeys.js file. In this file, you will make a list of items you do not want to appear in the logs. Sensitive data like user information, payment details, and confidential data, as defined in our database, services, and config files, are to be stored here. For our API, the list is small, but for larger apps it should contain as much as you want to redact from the log.

Create a folder within the src/utils folder called log and within it create a file named sensitiveKeys.js. Once created, add in the following code:

// Define sensitive keys you want to remove from logs
export const SensitiveKeys = {
  UserId: "userId",
  Password: "password", // this is how the value is stored in the config, service or database.
  NewPassword: "newPassword",
  OldPassword: "oldPassword",
  RepeatPassword: "repeatPassword",
  PhoneNumber: "phoneNumber",
  Token: "token",
  Authorization: "authorization",
};

Another helper file you will create is the constants.js file. This file is where we want to store some information about HTTP methods, HTTP headers, and response messages that stay the same throughout the entire codebase, hence the name constants. Create a folder named constants in the src/utils folder, create the constants.js file in the new folder, and add the code below:

export const SuccessMessages = {
  CreateSuccess: "Resource created successfully",
  GetSuccess: "Resource retrieved successfully",
  UpdateSuccess: "Resource updated successfully",
  DeleteSuccess: "Resource deleted successfully",
  GenericSuccess: "Operation completed successfully",
  UserRemoveSuccess: "User removed!",
  ProductRemoveSuccess: "Product removed!",
};

export const HTTPHeaders = {
  ResponseTime: "x-response-time",
  ForwardedFor: "x-forwarded-for",
};

export const HTTPMethods = {
  HEAD: "HEAD",
  GET: "GET",
  POST: "POST",
  PATCH: "PATCH",
  PUT: "PUT",
  DELETE: "DELETE",
};

The next helper function is the redactedData.js file. It will take the values defined in sensitiveKeys.js and replace the matching keys with ****** in any request or response JSON body parsed through it. This way, sensitive data is not exposed in the log or in the alert sent to WhatsApp.
Create the redactedData.js file inside the src/utils/log folder and add the following code inside the file:

import { SensitiveKeys } from "./sensitiveKeys.js";

const sensitiveKeysList = Object.values(SensitiveKeys);

export const redactLogData = (data) => {
  if (typeof data === 'object' && data !== null) {
    if (Array.isArray(data)) {
      return data.map(item => redactLogData(item));
    }

    const redactedData = {};
    for (const key in data) {
      if (sensitiveKeysList.includes(key)) {
        redactedData[key] = '******'; // replace sensitive data with *
      } else {
        // Recursively redact sensitive keys within nested objects
        redactedData[key] = redactLogData(data[key]);
      }
    }
    return redactedData;
  } else {
    return data;
  }
};

Finally, for the helper functions, you will create indentation.js. This will help to define how we want to handle spacing and indentation in the log file, seeing as we are dealing with JSON objects most of the time. Create the indentation.js file in the src/utils/log folder and add in the following code:

export const LogIndentation = {
  None: 0,
  SM: 2, // Small
  MD: 4, // Medium
  LG: 6, // Large
  XL: 8, // XLarge
  XXL: 10,
  XXXL: 12,
};

Creating the Twilio Function

Next, you will write the functions for the alert messaging using Twilio. For this, you will create a new file named alertFunctions.js in the src/utils/alert directory. These functions will be responsible for activities like formatting the message that will be sent to WhatsApp into a readable format, as well as sending the messages. Notice how the code uses the Twilio keys obtained from the setup at the beginning of this tutorial to create an instance of a Twilio client for your use. Create the alert folder in the src/utils folder and create the alertFunctions.js file within it. In your alertFunctions.js file, add the following lines of code:

import twilio from "twilio";
import { accountSid, authToken } from "../../config/index.js";

// Initialize Twilio client with your credentials
const twilioClient = new twilio(accountSid, authToken);

// A formatted message to send to the user
const formatErrorAlert = async ({
  errorDescription,
  affectedEndpoint,
  startTime,
  duration,
  details,
  alertType,
  method,
}) => {
  return `
  *${alertType == "Critical" ? `⛔ Alert Type: ` : `🚫 Alert Type: `}${alertType}*\n
  ⚠️ Error Description: ${errorDescription}\n
  🌐 Affected Endpoint: ${affectedEndpoint}\n
  🔗 HTTP Method: ${method}\n
  🕒 Start Time: ${startTime}\n
  ⌛ Duration: ${duration}\n
  📝 Details: ${JSON.stringify(details)}\n
  `;
};

export const sendWhatsAppAlert = async (messageParams) => {
  const message = await formatErrorAlert(messageParams);
  try {
    await twilioClient.messages.create({
      body: `New Incident Alert:\n ${message}`,
      from: "whatsapp:<your Twilio WhatsApp number>",
      to: "whatsapp:<your own number>",
    });
    console.log(`WhatsApp Alert sent successfully.`);
  } catch (error) {
    console.error(`WhatsApp Alert error: ${error.message}`);
  }
};

The function formatErrorAlert does the formatting of the message and structures it in a readable manner, while the sendWhatsAppAlert function takes the formatted message as a parameter and sends it to the number designated to receive the alert. It is worth noting the following parameters:

Body - this contains the alert content to be sent.
From - The sender, who the message is coming from.
To - The recipient, who is receiving the message.

Replace the placeholder numbers with the Twilio WhatsApp phone number and your own personal number where the message will be sent, respectively. The Twilio WhatsApp number is shown in your console where you connect to the sandbox.
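Before wiring these functions into the logger, you may want to smoke-test the alert path in isolation. The script below is a hypothetical one-off (not part of the repository); it assumes you have already joined the sandbox, set your Twilio credentials in .env, and replaced the phone number placeholders in alertFunctions.js.

// test-alert.js — hypothetical smoke test; run with: node test-alert.js
import { sendWhatsAppAlert } from "./src/utils/alert/alertFunctions.js";

// Sample values only; no real request is involved
await sendWhatsAppAlert({
  errorDescription: "Manual test alert",
  affectedEndpoint: "/api/v1/auth/login",
  startTime: new Date().toLocaleString(),
  duration: "0.123s",
  details: { note: "smoke test" },
  alertType: "Error",
  method: "POST",
});

If everything is configured correctly, the message should arrive on the WhatsApp number you set as the to value.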
Creating The Logger Function

Here you will start with setting up the middleware that will be plugged into our APIs to capture the logs. You will create a log instance using Winston, define all the configurations and settings, and finally define a transport system for outputting the log. In the src/middlewares directory, create a new file named logger.js and add the code below to it:

import { randomBytes } from "crypto";
import winston from "winston";
import { LogIndentation } from "../utils/log/indentation.js";
import DailyRotateFile from "winston-daily-rotate-file";

const { combine, timestamp, json, printf } = winston.format;
const timestampFormat = "MMM-DD-YYYY HH:mm:ss";
const appVersion = process.env.npm_package_version;

const generateLogId = () => randomBytes(16).toString("hex");

export const httpLogger = winston.createLogger({
  format: combine(
    timestamp({ format: timestampFormat }),
    json(),
    printf(({ timestamp, level, message, ...data }) => {
      const response = {
        level,
        logId: generateLogId(),
        timestamp,
        appInfo: {
          appVersion,
          environment: process.env.NODE_ENV,
          processId: process.pid,
        },
        message,
        data,
      };

      // indenting logs for better readability
      return JSON.stringify(response, null, LogIndentation.MD);
    })
  ),
  transports: [
    // log to console
    new winston.transports.Console({
      // if set to true, logs will not appear
      silent: process.env.NODE_ENV === "test_env", // true/false
    }),
    // log to file, but rotate daily
    new DailyRotateFile({
      // each file name includes current date
      filename: "logs/rotating-logs-%DATE%.log",
      datePattern: "MMMM-DD-YYYY",
      zippedArchive: false, // zip logs true/false
      maxSize: "20m", // rotate if file size exceeds 20 MB
      maxFiles: "14d", // max files
    }),
  ],
});

The code above shows how to create a logger instance, the format configuration of what you want to log and how it should look, and the transports: one prints to the console (silenced when NODE_ENV is test_env), and the other saves to file using the DailyRotateFile method from the winston-daily-rotate-file package. To learn more about how to set up Winston for logging, check out this documentation.

Next, you will write the function that formats your logs into readable JSON. This is where you will call the function that sends the alert to WhatsApp when certain conditions are met. You will also use the function that redacts sensitive information here to remove it from the formatted logs. Create a new file in the src/utils/log directory named formatLog.js and add the following code to it:

import { sendWhatsAppAlert } from "../alert/alertFunctions.js";
import { HTTPHeaders } from "../constants/constants.js";
import { redactLogData } from "./redactedData.js";

const formatHTTPLoggerResponse = (req, res, responseBody, requestStartTime) => {
  let requestDuration = "";
  let startTime = "";

  const formattedBody = JSON.parse(responseBody);

  // Compute the duration and start time first, so both the
  // WhatsApp alert and the returned log object can use them
  if (requestStartTime) {
    const endTime = Date.now() - requestStartTime;
    requestDuration = `${endTime / 1000}s`; // ms to seconds

    // Create a Date object from the timestamp
    const date = new Date(requestStartTime);
    // Format the date into a human-readable string
    startTime = date.toLocaleString();
  }

  const textBody = {
    request: {
      host: req.headers.host,
      url: req.url,
      body: (req.body && redactLogData(req.body)) || {},
      params: req?.params,
      query: req?.query,
      clientIp: req?.headers[HTTPHeaders.ForwardedFor] ?? req?.socket.remoteAddress,
    },
    response: {
      statusCode: res.statusCode,
      requestDuration,
      body: redactLogData(formattedBody),
    },
  };

  // message params for the Twilio alert
  const messageParams = {
    errorDescription: formattedBody?.message,
    affectedEndpoint: req.baseUrl,
    startTime: startTime,
    duration: requestDuration,
    details: redactLogData(textBody),
    alertType: res.statusCode >= 500 ? "Critical" : "Error",
    method: req.method,
  };

  if (res.statusCode >= 400) {
    sendWhatsAppAlert(messageParams);
  }

  return {
    request: {
      headers: (req.headers && redactLogData(req.headers)) || {},
      host: req.headers.host,
      baseUrl: req.baseUrl,
      url: req.url,
      method: req.method,
      body: (req.body && redactLogData(req.body)) || {},
      params: req?.params,
      query: req?.query,
      clientIp: req?.headers[HTTPHeaders.ForwardedFor] ?? req?.socket.remoteAddress,
    },
    response: {
      headers: res.getHeaders(),
      statusCode: res.statusCode,
      requestDuration,
      body: redactLogData(formattedBody),
    },
  };
};

export default formatHTTPLoggerResponse;

A few things to note about the code above:

The responseBody string is parsed back into a JavaScript object with JSON.parse. The request duration and start time are computed up front so that they are available to both the WhatsApp message and the returned log object.
The textBody variable stores the details of the log we will share through WhatsApp.
The messageParams object holds the parameters passed to the sendWhatsAppAlert function.
All the message body parameters are passed through the redactLogData function to remove sensitive data.
If the status code is greater than or equal to 400, the WhatsApp alert is triggered. The alert is tagged Critical for status codes 500 and above, and Error for status codes between 400 and 499.

Lastly, for the logging functionalities, you will create a file that intercepts all requests that happen in the application to pick up the logs. It will be used in the entry server file as a middleware, above where the routes that it should intercept are defined or instantiated. This file will be created in the src/utils/log directory and named interceptor.js.
Add the following code to the file:

import formatHTTPLoggerResponse from "./formatLog.js";
import { HTTPMethods, SuccessMessages } from "../constants/constants.js";
import { httpLogger } from "../../middlewares/logger.js";

export const responseInterceptor = (req, res, next) => {
  // used to calculate time between request and the response
  const requestStartTime = Date.now();

  // Save the original response method
  const originalSend = res.send;

  let responseSent = false;

  // Override the response method
  res.send = function (body) {
    if (!responseSent) {
      if (res.statusCode < 400) {
        httpLogger.info(
          getResponseMessage(req.method),
          formatHTTPLoggerResponse(req, res, body, requestStartTime)
        );
      } else {
        httpLogger.error(
          body.message,
          formatHTTPLoggerResponse(req, res, body, requestStartTime)
        );
      }
      responseSent = true;
    }

    // Call the original response method
    return originalSend.call(this, body);
  };

  // Continue processing the request
  next();
};

function getResponseMessage(responseMethod) {
  switch (responseMethod) {
    case HTTPMethods.POST:
      return SuccessMessages.CreateSuccess;
    case HTTPMethods.GET:
      return SuccessMessages.GetSuccess;
    case HTTPMethods.PUT:
    case HTTPMethods.PATCH:
      return SuccessMessages.UpdateSuccess;
    case HTTPMethods.DELETE:
      return SuccessMessages.DeleteSuccess;
    default:
      return SuccessMessages.GenericSuccess;
  }
}

In this code, the middleware stores the original res.send method and overrides it, so the response body can be captured and used as log data first; the original res.send is then called so the response still reaches the client as usual. Note that the PUT and PATCH cases fall through to the same message: a single case PUT || PATCH expression would evaluate to "PUT" and never match PATCH requests. The getResponseMessage helper function matches the HTTP method in the response with the right message saved in the src/utils/constants/constants.js file. The logs are captured but first parsed through the formatHTTPLoggerResponse function we created earlier to get a formatted JSON object.
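For reference, a single entry written by this logger comes out roughly like the following. The values are illustrative, not real output, and the data object is trimmed for brevity (the actual entry carries the full request and response details built by formatHTTPLoggerResponse):

{
  "level": "info",
  "logId": "9f86d081884c7d659a2feaa0c55ad015",
  "timestamp": "Jan-01-2024 12:00:00",
  "appInfo": {
    "appVersion": "1.0.0",
    "environment": "development",
    "processId": 4321
  },
  "message": "Resource retrieved successfully",
  "data": {
    "request": { "method": "GET", "url": "/api/v1/user" },
    "response": { "statusCode": 200, "requestDuration": "0.042s" }
  }
}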
Updating the Server File

You will proceed to add the middleware to the server file, which is our entry to the app. The file is located in the root directory as index.js. You will add the middleware before the route definitions so you can intercept the requests that go through them. Update the code with the one below:

import express from "express";
import cors from "cors";
import redis from "redis";
import bodyParser from "body-parser";

import { port, redisURL } from "./src/config/index.js";
import { AppError } from "./src/utils/responseHandler.js";
import { responseInterceptor } from "./src/utils/log/interceptor.js";
import swaggerDocs from "./swagger.js";
import "./src/models/index.js";

import userRouter from "./src/routes/users.js";
import authRouter from "./src/routes/auth.js";

const app = express();

app.use(cors());
app.use(express.json());
app.disable("x-powered-by"); // less hackers know about our stack
app.use(bodyParser.urlencoded({ extended: false }));

// Your middleware function to handle errors
const errorHandler = (err, req, res, next) => {
  if (res.headersSent) {
    return next(err);
  }
  if (err instanceof AppError) {
    // If it's a CustomError, respond with the custom status code and message
    return res
      .status(err.statusCode)
      .json({ status: err.status, error: err.message, code: err.statusCode });
  } else {
    // If it's an unknown error, respond with a 500 status code and a generic error message
    return res
      .status(500)
      .json({ status: "critical", error: "Internal Server Error.", code: 500 });
  }
};

// create a client connection
export const client = redis.createClient({
  url: redisURL,
});

// on the connection
client.on("connect", () => console.log("Connected to Redis"));
client.connect();

// Run the swagger docs before log interception
swaggerDocs(app, port);

// Place an interceptor above all routes that you want to `intercept`
app.use(responseInterceptor);

/** HTTP GET Request */
app.get("/", (req, res) => {
  res.status(200).json("Home GET Request");
});

app.use("/api/v1/user", userRouter);
app.use("/api/v1/auth", authRouter);

// Apply the error handling middleware after the routes it covers,
// as Express requires for error-handling middleware
app.use(errorHandler);

app.listen(port, () => {
  console.log(`
 ###########################################
  Server is currently running at port ${port}
 ###########################################`);
});

Testing and Product Demonstration

Your app is now ready for testing. Before you start testing on WhatsApp, you can proxy the localhost server to the internet with ngrok by running the command below in another tab in your terminal.

ngrok http 5000

Remember, having ngrok installed and authenticated is one of the prerequisites for this tutorial. You will get a response like this, with your public app address hosted on ngrok. Your server is now up and running. Once the app is running, open WhatsApp and send join <sandbox code> first in order to establish a connection to the sandbox. When you have established a connection, you can go ahead and make a sample request to your API. In this case, I created a user and tried to log in with the wrong credentials to get the alert. Below is a demonstration of how it works:

Navigate to the ngrok Forwarding URL and append /docs to open up the Swagger UI.
Register a user using the /signup endpoint on the /api/v1/auth route.
Sign in or log in using the wrong credentials to get a 400 status code. If the request fails in Swagger, copy the displayed curl request and run it in another terminal tab.
Get the error alert on the WhatsApp number set as the to number in the Twilio function.

Conclusion

If you followed along to this point, congratulations! You have been able to use Twilio's powerful communication suites to build a handy incident alert service. This service can be extended to many other possibilities, such as daily summaries of each endpoint's API health.
You could also add /slash commands that create a GitHub issue for any incident that is critical enough. You can learn more about using the Twilio WhatsApp API in a production environment by referring to Twilio's documentation.

Desmond Obisi is a software engineer and a technical writer who loves developer experience engineering. He's very invested in building products and providing the best experience to users through documentation, guides, building relations, and strategies around products. He can be reached on Twitter, LinkedIn, or by email at desmond.obisi.g20@gmail.com.
Twilio Conversations is a powerful communication platform that enables businesses to connect with their target audiences seamlessly, spanning various channels such as SMS, MMS, WhatsApp, web, and mobile chat. However, when it comes to start-up companies with limited resources, effectively serving a diverse and multilingual audience can pose challenges – particularly due to language barriers. In this article, you'll solve this problem by building a chat application that enables real-time chat translations. You will create a dynamic chat experience where messages are automatically translated to suit the default language of the respective clients. By leveraging the capabilities of Twilio Conversations, along with Flask as the backend framework and the DeepL API for language translation, you can provide a seamless and efficient communication solution for businesses operating in multilingual environments.

Prerequisites

Before proceeding with the tutorial, you should meet the following requirements:

Python 3.7+ installed.
MongoDB Server installed.
Some understanding of, or willingness to learn, the Flask web framework and Jinja templating engine.
Text editor.
Package managers (npm and pip).
A Twilio account. If you don't have one, you can create a free account here.
A DeepL account. If you don't have one, you can create a free account here.

Project structure

You'll be building an application that fuses frontend and backend technologies to produce a fully functioning web chat. To get started, clone or download the starter files from this GitHub repository: startkit-flask-twilio-deepl. Or, download the full project in this GitHub repository. The repository contains the following folders and files:

templates - This folder contains all HTML files used for the project. Each file typically contains HTML code with Tailwind utility classes for styling. The templates also contain Jinja2 code for link building and for passing variables from the backend to the frontend.
static - Contains CSS styles generated by Tailwind.
requirements.txt - Contains a list of Python dependencies.
package.json - Contains a list of frontend dependencies and a script for starting the Tailwind build process.
README.md - Provides information on how to run the app.

Create a virtual environment

Before you begin coding, you need to set up your development environment. Start by navigating to the GitHub repository you cloned and creating a new virtual environment.

cd startkit-flask-twilio-deepl

Install virtualenv if it's not installed already.

pip install virtualenv

Create a virtual environment:

virtualenv venv

Activate the virtual environment with the following command:

source venv/bin/activate

Virtual environments are a great way to isolate project dependencies to avoid conflicts.

Build out the backend of the chat application

With your virtual environment activated, you can safely install the Python dependencies for the project. As mentioned earlier, the starterkit repository contains a requirements.txt file listing the Python dependencies required for the project. Install the packages listed in requirements.txt with the command:

pip install -r requirements.txt

Here's a breakdown of the installed dependencies:

flask: Flask is a popular web framework for Python. It provides a simple and efficient way to build web applications.
flask-login: Flask-Login is an extension for Flask that handles user authentication and session management. It simplifies managing user logins, logouts, and user sessions in Flask applications.
flask-wtf: Flask-WTF is an extension for Flask that integrates Flask with the WTForms library. WTForms is a flexible form validation and rendering library for Python. Flask-WTF simplifies the process of creating and handling web forms in Flask applications.
flask-pymongo: Flask-PyMongo is also an extension for Flask that provides integration with the PyMongo library, which is a Python driver for MongoDB. It allows Flask applications to interact with Mongo servers easily.
twilio: Twilio allows you to send SMS messages, make phone calls, and perform various other communication tasks.
python-dotenv: python-dotenv is a Python library that helps in managing application configurations stored in a .env file. It allows you to define environment variables in a .env file and load them into your Python application easily.
deepl: DeepL is a client library for the DeepL API, allowing you to integrate DeepL translation functionality into your Python applications.

Next, create a .env file in the project directory to safely store secret keys and tokens for third-party authentication. Add the following lines to .env:

TWILIO_ACCOUNT_SID=<your-twilio-account-sid>
TWILIO_AUTH_TOKEN=<your-twilio-auth-token>
TWILIO_API_KEY_SID=<your-twilio-api-key-sid>
TWILIO_API_KEY_SECRET=<your-twilio-api-key-secret>
FLASK_SECRET_KEY=<your-flask-secret-key>
DEEPL_AUTH_KEY=<your-deepl-auth-key>

Here, you'll assign credentials obtained from Twilio and DeepL to environment variables, which you'll use in later sections of the tutorial. The FLASK_SECRET_KEY is a random string that you can generate at your discretion to secure your Flask app. For example, a Flask secret key could be "xPIOKah0mW", or something else you can generate with a random string generator.

Set up the frontend of the chat application

The starter folder contains the HTML templates and styling used to create this tutorial. However, in order to generate Tailwind styles, some packages need to be installed. Install the frontend dependencies from the package.json file by navigating to the root of the project directory and running the following command:

npm install

Then build the Tailwind CSS styles with this command:

npm run build

The package.json file contains a build script that runs a command that starts the Tailwind build process.

Set up the Flask app

With the development environment set up, you can move ahead to create your base Flask app. Create an app.py file in the root of the project directory and write the following code:

"""
Base Flask Application
"""
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, World!"

if __name__ == "__main__":
    app.run()

Run the Flask app from your terminal using the command:

flask run --debug

Here's a screenshot of the running app in my browser. If no errors are encountered at this point, you can move ahead to configure MongoDB to work with your Flask app.

Configure your database

Start by creating a file named db.py at the root of the project directory (i.e., startkit-flask-twilio-deepl/db.py). Add the code below to the Python file:

from flask_pymongo import PyMongo

mongo = PyMongo()

Here, the PyMongo class was imported from flask_pymongo and assigned to a variable named mongo. flask_pymongo is a wrapper for PyMongo's MongoClient. It makes connecting to a Mongo server more convenient, and it provides some helper functions as well.
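Creating the shared mongo instance in its own module means any part of the app can import it without circular-import headaches. Once it has been initialized against the Flask app (done in the next step), collections are queryable as attributes of mongo.db. A quick illustrative sketch (the names here are examples, not project code):

from db import mongo

# Inside a view function, after mongo.init_app(app) has run:
customer = mongo.db.customer.find_one({"username": "alice"})
total_reps = mongo.db.customer_rep.count_documents({})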
Next, you can import the mongo client and other necessary classes and libraries in app.py:

"""
Base flask application
"""
import os
from uuid import uuid4

from deepl import Translator
from flask import Flask, render_template, request, jsonify
from flask_login import LoginManager, current_user
from dotenv import load_dotenv
from db import mongo, User
from bson import ObjectId

# custom modules that will be created shortly
from auth.customer import blp as customer_blp
from auth.customer_rep import blp as rep_blp
from conversations.twilio_chat import blp as chat_blp

load_dotenv()

app = Flask(__name__)

# base flask config
app.config["MONGO_URI"] = "mongodb://localhost:27017/webchat"
app.secret_key = os.getenv("FLASK_SECRET_KEY")

# initialize mongodb
mongo.init_app(app)

# authenticate deepl
translator = Translator(os.getenv("DEEPL_AUTH_KEY"))

@app.route("/")
def index():
    return "Hello, World!"

if __name__ == "__main__":
    app.run()

The updated code imports necessary modules and classes from different libraries, including the Mongo client from db.py. In the code, load_dotenv() is called to give your application access to the environment variables defined in the .env file created earlier. The MongoDB connection URI is set in the Flask app's configuration using app.config["MONGO_URI"]. By default, the local MongoDB server is accessible on "localhost:27017". The connection string instructs the Flask app to connect to a local MongoDB server on the default port and use a database named "webchat." Also, a secret key for Flask sessions is set from the environment variable FLASK_SECRET_KEY using os.getenv("FLASK_SECRET_KEY"). Next, the MongoDB connection is initialized using mongo.init_app(app), where mongo is an instance of Flask-PyMongo used to interact with MongoDB. Then, an instance of the DeepL Translator class is created using Translator(os.getenv("DEEPL_AUTH_KEY")), where DEEPL_AUTH_KEY is retrieved from the environment. This allows your app to use the DeepL translation service.

Create collections in DB

Collections in MongoDB are similar to tables in relational databases. The web chat will use two collections to store data for customers and customer representatives. You could choose to use one collection; that'll work fine. I prefer splitting them. The collections will have the structure below:

username: <string>
password: <string>
role: <string>
language: <string>
chat_id: <string or Null>

The language attribute will store acronyms for languages supported by DeepL. The list of supported languages will be available for customers to choose from when they sign up.

If you're working on a Linux machine, you can start your Mongo server with the command below:

sudo systemctl start mongod

Next, start the Mongo shell:

mongosh

Switch to your database with the use <DATABASE_NAME_HERE> command, then create the customer and customer_rep collections with the commands:

db.createCollection("customer")
db.createCollection("customer_rep")

For the sake of this tutorial, I won't be creating an endpoint for signing up customer reps. Instead, you will create a record for a customer representative in the database using the Mongo shell. Use the command below:

db.customer_rep.insertOne({
  username: "admin",
  password: "password",
  language: "EN-US",
  role: "customer_rep",
  chat_id: null
})

I didn't hash the password for the customer representative profile you created. This is a bad practice that should be avoided, as only hashed passwords should be stored in a database. But, you can break the rules a bit for testing purposes. However, you will hash passwords when creating customer objects.
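If you'd rather not store even a test password in plaintext, you could seed the rep from a short script instead of the Mongo shell. This is an optional sketch, not part of the tutorial's flow; note that the rep login implemented later compares passwords directly, so you would also need to switch that check to check_password_hash().

# seed_rep.py — optional alternative to the insertOne above
from pymongo import MongoClient
from werkzeug.security import generate_password_hash

client = MongoClient("mongodb://localhost:27017")
db = client["webchat"]

db.customer_rep.insert_one({
    "username": "admin",
    "password": generate_password_hash("password"),  # hashed instead of plaintext
    "language": "EN-US",
    "role": "customer_rep",
    "chat_id": None,
})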
Define a user object

The User object serves as a convenient way to access and manipulate information about a logged-in user. By storing user information in an instance of the User class, you can easily retrieve specific attributes like the username or role when needed. By doing so, you avoid constantly reading from the database to fetch certain details. This object can be utilized in various parts of your application that require user-related functionality, such as translating texts based on a user's language. Add the following lines to db.py:

from flask_pymongo import PyMongo
from flask_login import UserMixin

mongo = PyMongo()


class User(UserMixin):
    """
    Models a user
    """
    def __init__(self, user_data):
        self.id = str(user_data["_id"])
        self.username = user_data["username"]
        self.password = user_data["password"]
        self.language = user_data["language"]
        self.role = user_data["role"]
        self.chat_id = user_data["chat_id"]

In the updated code, the UserMixin class is imported from the flask_login module and inherited by the User class. This inheritance allows the User object to be compatible with Flask-Login. The User class represents a user in the application. It has an __init__ method that takes user data as input and initializes various attributes such as id, username, password, language, role, and chat_id. These attributes are assigned values based on the corresponding fields in the user_data dictionary.

Implement form validations

The web chat features sign-up and login forms for user authentication. You'll be defining some rules for form fields using Flask-WTF to ensure a user submits forms with the required fields. Create a file named validations.py in the project directory. Add the following lines of code:

from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, SelectField
from wtforms.validators import InputRequired, equal_to


class SignupForm(FlaskForm):
    """
    Validations for signup form
    """
    username = StringField(
        label="Username",
        validators=[InputRequired(message="Username cannot be left blank")]
    )
    password = PasswordField(
        label="Password",
        validators=[InputRequired(message="Password cannot be left blank")],
    )
    language = SelectField(
        "Language",
        choices=[
            "BG", "CS", "DA", "DE", "EL", "EN-US", "EN-GB", "ES", "ET", "FI",
            "FR", "HU", "ID", "IT", "JA", "KO", "LT", "LV", "NB", "NL", "PL",
            "PT", "RO", "RU", "SK", "SL", "SV", "TR", "UK", "ZH",
        ],
    )
    confirm_password = PasswordField(
        label="Confirm Password",
        validators=[
            InputRequired(message="Password cannot be left blank"),
            equal_to("password", message="passwords do not match"),
        ],
    )


class LoginForm(FlaskForm):
    """
    Validation for login form
    """
    username = StringField(
        label="Username",
        validators=[InputRequired(message="Provide a username")]
    )
    password = PasswordField(
        label="Password",
        validators=[InputRequired(message="Password cannot be left blank")],
    )

The code above defines two Flask forms: SignupForm for user registration and LoginForm for user login. SignupForm includes fields for username, language, password, and confirm_password, all of which have validation rules such as input required and matching password confirmation. LoginForm includes fields for username and password, both with input required validation. These forms help ensure that the submitted data meets the specified requirements before further processing or authentication.

Implement authentication blueprints

Flask blueprints are a way to organize and structure applications into reusable components.
They provide a means to define and group routes, views, templates, and static files related to a specific feature or module of your application. You'll implement blueprints to handle authentication for customers and customer reps, allowing you to customize the URL prefix for each blueprint. First, you'll create a subdirectory named auth in the project folder. In this subdirectory, create two files: customer.py and customer_rep.py. Write the following code in flask-twilio-deepl/auth/customer.py:

from flask import Blueprint, request, render_template, redirect, url_for, flash
from db import mongo, User
from validations import LoginForm, SignupForm
from werkzeug.security import generate_password_hash, check_password_hash
from pymongo.errors import WriteError
from flask_login import logout_user, login_user, login_required

blp = Blueprint("customer", __name__, url_prefix="/auth/customer")

The code above imports the Blueprint class from the Flask framework along with other Python packages. A new instance of the Blueprint class named blp was also created. Here's more detail about the arguments passed in the class constructor:

customer is the unique identifier of the customer blueprint. It is used to differentiate the blueprint you defined from others when registering it with the Flask application.
__name__ is a special Python variable that represents the name of the current module. It is typically passed as the second argument to the Blueprint constructor to ensure that Flask knows where to find the blueprint resources.
The url_prefix="/auth/customer" argument specifies the URL prefix that will be applied to all routes defined within this blueprint. In this case, the blueprint's routes will be prefixed with /auth/customer, meaning that any routes defined in this blueprint will be accessible at URLs like /auth/customer/<route_name>.

Next, you'll implement endpoints for customer sign-up and login. Add the following lines to customer.py:

@blp.route("/register", methods=["POST", "GET"])
def register():
    """
    Create customer account
    """
    form = SignupForm()
    if request.method == "POST" and form.validate_on_submit():
        username = request.form.get("username")
        language = request.form.get("language")
        password = request.form.get("password")
        password_hash = generate_password_hash(password)

        # check if username exists in database
        user = mongo.db.customer.find_one({"username": username})
        if user:
            flash("User already exists")
            return render_template("signup.html", form=form)

        try:
            mongo.db.customer.insert_one(
                {
                    "username": username,
                    "password": password_hash,
                    "language": language,
                    "role": "customer",
                    "chat_id": None,
                }
            )
            return redirect(url_for("customer.login"))
        except WriteError:
            flash("Error creating account.")
            return render_template("signup.html", form=form)

    return render_template("signup.html", form=form)

The code above implements a route within the customer blueprint for customer registrations. This route is accessible at /auth/customer/register and supports both GET and POST methods. To ensure that users provide the necessary data during registration, the function utilizes the SignupForm object to enforce form validation. If the request method is POST and the form data passes validation (form.validate_on_submit()), the function proceeds to extract the username, language, and password from the form data. The extracted password is hashed using the generate_password_hash() function. A query is then run against the MongoDB database to check whether a user with the same username already exists.
If a user with the same username exists, a response is returned with the message "User already exists." If the username is unique, a new document is inserted into the "customer" collection in the MongoDB database. This document contains the username, hashed password, language, role, and chat ID fields. The function then redirects the user to the customer.login route. If the request method is GET or the form validation fails, the function renders the signup.html template and passes the form object to the template for further processing. Next, you'll implement endpoints to handle customer login and logout. Add the code below to the customer.py file: @blp.route("/login", methods=["POST", "GET"]) def login(): """ Login customer """ form = LoginForm() if request.method == "POST" and form.validate_on_submit(): username = request.form.get("username") password = request.form.get("password") user = mongo.db.customer.find_one({"username": username}) if user and check_password_hash(user["password"], password): login_user(User(user)) return redirect(url_for("index")) else: flash("Username/Password incorrect", "error") return render_template("login.html", form=form) return render_template("login.html", form=form) @blp.route("/logout") @login_required def logout(): """ endpoint to clear current login session """ logout_user() return render_template("index.html") The code above defines two routes: /login and /logout. Inside the login() function, a LoginForm object is instantiated. The form.validate_on_submit() method is used to check if the form is submitted and passes the validation rules. If the form is submitted and valid, the code retrieves the username and password from the request's form data. It then queries the MongoDB database to find a customer with a matching username. If a user is found and the hashed password in the database matches the provided password using the check_password_hash() function, the user is considered authenticated. If authentication is successful, the login_user() function is called with a User object to log in the user. The user is then redirected to the index endpoint using the redirect() and url_for() functions. Inside the logout() function, the logout_user() function is called to clear the current user's login session. Then, the index.html template is rendered to display the main page or homepage of the application. The customer blueprint is now complete. Next, you will implement the customer_rep blueprint, which is quite similar to the customer blueprint. Before moving on, the short sketch below shows what the password helpers used above actually do.
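This is a minimal, standalone sketch (not part of the project files) demonstrating Werkzeug's hashing helpers with a made-up password:

```python
# Standalone sketch: how the password helpers used in register() and login() behave.
from werkzeug.security import generate_password_hash, check_password_hash

password_hash = generate_password_hash("s3cret-example")      # salted hash, safe to store
print(password_hash)                                          # e.g. "scrypt:32768:8:1$..."
print(check_password_hash(password_hash, "s3cret-example"))   # True
print(check_password_hash(password_hash, "wrong-guess"))      # False
```

Because each hash embeds its own random salt, two users with the same password still end up with different stored hashes. With that behavior confirmed, you can move on to the customer_rep blueprint.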
Paste the following code inside the flask-twilio-deepl/auth/customer_rep.py file: from flask import Blueprint, request, render_template, redirect, flash, url_for from db import mongo, User from validations import LoginForm from flask_login import login_user, logout_user blp = Blueprint("rep", __name__, url_prefix="/auth/rep") @blp.route("/login", methods=["POST", "GET"]) def login(): """ Login customer rep """ form = LoginForm() if request.method == "POST" and form.validate_on_submit(): username = request.form.get("username") password = request.form.get("password") customer_reps = mongo.db.customer_rep.find_one({"username": username}) if customer_reps and customer_reps["password"] == password: login_user(User(customer_reps)) users = mongo.db.customer.find() context = {'conversations':[]} for user in users: context['conversations'].append({'username': user['username'], 'chat_id': user['chat_id']}) return render_template("repchats.html", context=context) else: flash("username/password incorrect") return render_template("rep_login.html", form=form) return render_template("rep_login.html", form=form) @blp.route('/logout') def logout(): """ Log out user """ logout_user() return redirect(url_for("index")) The code above is similar to what you already have for the customer blueprint (note the leading slash in the /logout route; Flask rejects URL rules that don't start with one). If a matching customer representative is found and the password matches, the user is logged in using login_user(User(customer_reps)), where User is the custom user model that also represents customer representatives. Be aware that this login compares the stored password in plain text; that only works here because the demo rep account is seeded by hand in the Mongo shell later in this tutorial, and a production app should store a hash and verify it with check_password_hash(), exactly as the customer blueprint does. The code fetches all customers from the database and creates a context dictionary with conversation information. Then, if the login is successful, the repchats.html template is rendered, passing the context data. If the request method is GET or the form validation fails, the rep_login.html template is rendered, passing the login form. Implement Flask Login Flask-Login will be used for session management.
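Before wiring it into this project, it helps to see the whole Flask-Login loop in isolation. The toy app below is a hypothetical, self-contained sketch (an in-memory dict stands in for MongoDB; none of these names come from the tutorial's code):

```python
# Standalone sketch of the Flask-Login contract: a LoginManager, a user_loader
# that turns a stored id back into a user object, and a login_user() call.
from flask import Flask
from flask_login import LoginManager, UserMixin, current_user, login_user

app = Flask(__name__)
app.secret_key = "dev-only-secret"
login_manager = LoginManager(app)

USERS = {"1": {"_id": "1", "username": "ada"}}  # stand-in for a database

class DemoUser(UserMixin):
    def __init__(self, data):
        self.id = data["_id"]
        self.username = data["username"]

@login_manager.user_loader
def load_user(user_id):
    data = USERS.get(user_id)
    return DemoUser(data) if data else None

@app.route("/login-demo")
def login_demo():
    login_user(DemoUser(USERS["1"]))  # stores the user's id in the session cookie
    return f"logged in as {current_user.username}"

if __name__ == "__main__":
    app.run(port=5001)  # arbitrary port so it doesn't clash with the main app
```

On every subsequent request, Flask-Login reads the id out of the session cookie and calls load_user() to rebuild current_user.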
To utilize Flask-Login, update app.py with the code below: """ Base flask application """ import os from uuid import uuid4 from deepl import Translator from flask import Flask, render_template, request, jsonify from flask_login import LoginManager, current_user from dotenv import load_dotenv from db import mongo, User from bson import ObjectId from auth.customer import blp as customer_blp from auth.customer_rep import blp as rep_blp from conversations.twilio_chat import blp as chat_blp load_dotenv() app = Flask(__name__) # base flask config app.config["MONGO_URI"] = "mongodb://localhost:27017/webchat" app.secret_key = os.getenv("FLASK_SECRET_KEY") # initialise flask login login_manager = LoginManager(app) # initialise mongodb mongo.init_app(app) # authenticate deepl translator = Translator(os.getenv("DEEPL_AUTH_KEY")) # register blueprints app.register_blueprint(customer_blp) app.register_blueprint(rep_blp) app.register_blueprint(chat_blp) @login_manager.user_loader def load_user(user_id): """ fetch user id for login session """ user_data = mongo.db.customer.find_one( {"_id": ObjectId(user_id)} ) or mongo.db.customer_rep.find_one({"_id": ObjectId(user_id)}) if user_data: return User(user_data) return None # set default login view for protected routes login_manager.login_view = "customer.login" @app.route("/") def index(): """ return template for index page """ if current_user.is_anonymous: return render_template("index.html", user_id="anonymous") else: user = current_user.username id = str(uuid4()) user_id = user + "-" + id return render_template("index.html", user_id=user_id) Notable changes made in app.py include the following: Flask Login Manager Initialization: An instance of LoginManager is created using LoginManager(app) to handle user authentication and session management. Blueprint Registration: Flask blueprints for different functionalities (customer_blp, rep_blp, chat_blp) are registered with the Flask app using app.register_blueprint(). The import paths match the files created earlier (auth/customer.py and auth/customer_rep.py); note that the chat_blp, imported from conversations/twilio_chat.py, will be implemented in the next section of the tutorial. User Loader Function: The load_user() function is used by Flask-Login to load and retrieve user data. It is specifically designated as the user loader function by using the @login_manager.user_loader decorator. It fetches user data from the MongoDB database based on the provided user_id and returns a User object representing the user. Default Login View: The default login view for protected routes is set to "customer.login" using login_manager.login_view. It ensures that unauthenticated users are redirected to the customer login page. Index Route: The "/" route is defined using @app.route("/") and the associated function. It renders the index.html template using render_template(). If the current user is anonymous (not logged in), the template is rendered with the user_id set to "anonymous". Otherwise, the template is rendered with a user_id generated using a combination of the username and a unique identifier. Set up Flask app to use Twilio Conversations In this section, you'll configure your Flask app to use Twilio's Python SDK to: Create and fetch conversations. Create and add participants to conversations. Generate a token for the Twilio Conversations client library. You'll start by creating a folder named conversations in your project directory. Breaking the code down into multiple folders helps to keep things organized. This can be done using a GUI or your terminal as shown in the code below.
mkdir conversations Next, you'll create a Python file named twilio_chat.py inside the conversations folder. This file will serve as a blueprint for implementing Twilio conversations on the backend. Add the following lines to twilio_chat.py: import os from twilio.jwt.access_token import AccessToken from twilio.jwt.access_token.grants import ChatGrant from twilio.rest import Client from twilio.base.exceptions import TwilioException from flask_login import login_required from flask import Blueprint, render_template from flask_login import current_user from bson import ObjectId blp = Blueprint("chat", __name__, url_prefix="/chat") account_sid = os.getenv("TWILIO_ACCOUNT_SID") auth_token = os.getenv("TWILIO_AUTH_TOKEN") client = Client(account_sid, auth_token) The os module is imported to access environment variables, the Twilio REST library provides the Client class, and Twilio's exceptions module is imported for error handling. Next, the module is declared as a blueprint named chat with a URL prefix set as /chat. Then, environment variables are loaded from the environment file created previously. The values of the Twilio credentials are assigned to the variables account_sid and auth_token. These variables are then used to initialize the Twilio Client constructor, and the created class instance is assigned to a variable named client. Create helper function In this module, you'll be generating access tokens for customers and customer reps. Having a helper function to handle this task will reduce repeated code in your Python script. Append the code in twilio_chat.py with the following lines: def generate_access_token(identity, service_sid): """ Generates access token Args: identity - identity of conversation participant service_sid - unique ID of the Conversation Service Return: jwt encoded access token """ twilio_account_sid = os.environ.get("TWILIO_ACCOUNT_SID") twilio_api_key_sid = os.environ.get("TWILIO_API_KEY_SID") twilio_api_key_secret = os.environ.get("TWILIO_API_KEY_SECRET") token = AccessToken( twilio_account_sid, twilio_api_key_sid, twilio_api_key_secret, identity=identity, ) token.add_grant(ChatGrant(service_sid=service_sid)) return token.to_jwt() Here, you've declared a function named generate_access_token that accepts two arguments: identity and service_sid. identity refers to the identifier associated with a particular participant of a conversation. The service_sid is the unique ID of the Conversation Service a conversation belongs to. First, the function retrieves the necessary credentials (Twilio account SID, API key SID, and API key secret) from environment variables so that an AccessToken can be created; a sample .env covering all of these variables appears just before the conversation routes below. The function adds a grant to the access token using the add_grant method, which takes a ChatGrant object initialized with the Conversation Service SID as the service_sid. Then the function returns the access token encoded as a JWT (JSON Web Token). Create conversations for customers As mentioned earlier, the web app will allow customers and customer representatives to engage in isolated conversations.
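Here is that .env sketch. Every name below matches a variable that app.py or twilio_chat.py reads with os.getenv() or os.environ.get(); all values are placeholders, and the file should never be committed to version control:

```
# .env (placeholder values)
FLASK_SECRET_KEY=replace-with-a-long-random-string
DEEPL_AUTH_KEY=your-deepl-api-key
TWILIO_ACCOUNT_SID=ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TWILIO_AUTH_TOKEN=your-twilio-auth-token
TWILIO_API_KEY_SID=SKxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TWILIO_API_KEY_SECRET=your-api-key-secret
```

With the credentials in place, you can implement the conversation route.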
Add the following lines of code to twilio_chat.py: @blp.route("/<string:user_id>") @login_required def conversation(user_id): """Create Twilio conversation""" # check if user exists # if user is active check there is an existing conversation # if yes, retrieve conversation if current_user.is_active and current_user.chat_id: chat_id = current_user.chat_id conversation = client.conversations.v1.conversations(chat_id).fetch() # generate an access token service_sid = conversation.chat_service_sid token = generate_access_token(current_user.username, service_sid) context = { "token": token, "chat_id": conversation.sid, "role": current_user.role, "language": current_user.language, } return render_template("chat.html", context=context) elif current_user.is_authenticated and current_user.chat_id == None: # create conversation try: conversation = client.conversations.v1.conversations.create( friendly_name=user_id ) except TwilioException as err: print("Error:", err) user = current_user.id from db import mongo # add chat_id for current user to database mongo.db.customer.update_one( {"_id": ObjectId(user)}, {"$set": {"chat_id": conversation.sid}} ) try: # add current user to conversation client.conversations.v1.conversations(conversation.sid).participants.create( identity=current_user.username ) except TwilioException as err: print("Error:", err) # generate an access token service_sid = conversation.chat_service_sid token = generate_access_token(current_user.username, service_sid) context = { "token": token, "chat_id": conversation.sid, "role": current_user.role, "language": current_user.language, } return render_template("chat.html", context=context) The route is defined with the endpoint /<string:user_id>, which expects a user_id parameter in the URL path. @login_required decorator ensures that the user needs to be authenticated to access this route. The conversation function checks if the current user is active and has an existing chat_id. If a chat_id exists for the user, it fetches the conversation details from Twilio using the chat_id. An access token is generated for the user using the generate_access_token function, providing the user's username and the service SID from the fetched conversation. Relevant information, such as the access token, conversation SID, user role, and language, is stored in the context dictionary. The chat.html template is rendered with the context passed as an argument. The function also handles cases where the current user is authenticated but has no Twilio conversation associated with it. A new conversation is created using the user's user_id as the friendly name. The conversation sid is stored as the chat_id for the current user in the database. The current user is added as a participant in the conversation that was created. An access token is generated using the same process as mentioned before. The context dictionary is populated with the relevant information, and then the chat.html template is rendered with the context passed as an argument. The conversation endpoint ensures that a new conversation instance is created or an existing conversation is retrieved whenever a customer clicks on the Start Conversation button that will be provided on the frontend. Based on this design approach, only customers can initiate conversations. Customer representatives only have a list of available conversations which they can join as participants. 
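If you want to poke at these Conversations resources outside of Flask, the standalone sketch below uses the same SDK calls as the route above (conversations(...).fetch() and participants.list()); the conversation SID is a hypothetical placeholder you would copy from your Twilio Console:

```python
# Standalone sketch: fetch a conversation and list its participants.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
chat_sid = "CHxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder conversation SID

conversation = client.conversations.v1.conversations(chat_sid).fetch()
print(conversation.friendly_name, conversation.chat_service_sid)

for participant in client.conversations.v1.conversations(chat_sid).participants.list():
    print("participant:", participant.identity)
```

This is handy for verifying that a customer really was added as a participant before a rep tries to join.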
Add customer representatives to the Twilio conversation To add a customer representative to a conversation, you will create a different endpoint that you can call join_conversation. Add the following lines to twilio_chat.py to implement this: @blp.route("/support/<string:chat_id>") def join_conversation(chat_id): """ Add a customer rep to an existing conversation """ if current_user.is_authenticated and current_user.chat_id == None: conversation = client.conversations.v1.conversations(chat_id).fetch() participants = client.conversations.v1.conversations( conversation.sid ).participants.list() user = None # check if current user is a participant of the conversation for participant in participants: if participant.identity == current_user.username: user = participant break if user is None: try: client.conversations.v1.conversations( conversation.sid ).participants.create(identity=current_user.username) except TwilioException as err: print("Error:", err) # generate an access token service_sid = conversation.chat_service_sid token = generate_access_token(current_user.username, service_sid) context = { "token": token, "chat_id": conversation.sid, "role": current_user.role, "language": current_user.language, } return render_template("chat.html", context=context) The join_conversation endpoint adds a customer representative to an existing Twilio conversation. First, it checks whether the rep is already a participant of the conversation being accessed; if not, the current user is added as a participant. Either way, an access token is then generated for the current user and the chat page is rendered on the client side. The route is defined with the endpoint /support/<string:chat_id>, which expects a chat_id parameter in the URL path. The function checks if the current user is authenticated and doesn't have a chat_id. It fetches the details of the specified conversation from Twilio using the chat_id and retrieves the list of participants associated with the conversation. The code then iterates through the participants to check if the current user is already a participant in the conversation by comparing the usernames. If the user is not found among the participants, they are added to the conversation using the participants.create() method. An access token is generated for the user using the generate_access_token function, providing the user's username and the service SID from the fetched conversation. The relevant information, such as the access token, conversation SID, user role, and language, is stored in the context dictionary. Then, the chat.html template is rendered with the context passed as an argument. Configure Twilio conversations on the client side You'll be working with HTML templates for this section of the tutorial. Navigate to your templates folder and open chat.html. This file will handle the rendering of messages between customers and customer representatives. To get started, copy and paste the Twilio Conversations client library CDN script before the closing </body> tag in the HTML template. <script src="https://media.twiliocdn.com/sdk/js/conversations/v2.4/twilio-conversations.min.js"></script> This approach sets a global Twilio.Conversations object in the browser, allowing you to instantiate the Client class with the access token generated on the backend.
Next, add the following lines of code below the CDN script tag: <script> const token = "{{ context.token }}"; const chat_id = "{{ context.chat_id }}"; const role = "{{ context.role }}"; const language = "{{ context.language }}"; const client = new Twilio.Conversations.Client(token); let conv; client.on("initialized", () => { console.log("Client initialized successfully"); // Use the client. }); // To catch client initialization errors, subscribe to the `'initFailed'` event. client.on("initFailed", ({ error }) => { // Handle the error. console.log(error); }); </script> The code above sets up the client-side JavaScript code to initialize the Twilio Conversations client using the access token. It handles the initialization success and failure events by logging messages to the console. The role variable represents the role of the user in the chat (e.g., customer or customer representative). The language variable holds the language associated with the user (e.g., the default language for translation purposes). A new instance of the Twilio Conversations client is created using the access token stored in the token variable. An event listener is set up for the initialized event of the client. When the client is successfully initialized, the callback function is executed, and a log message is printed to the console. Another event listener is set up for the initFailed event of the client. If an error occurs during client initialization, the callback function is executed, and the error message is logged to the console. Set up chat translations with DeepL In the previous section, you initialized the Twilio Conversations client using an access token generated from the backend. If the client initializes successfully, a success message is logged to the console; otherwise, an error message is logged. Receive and process translation requests Text translations will be executed on the server side. That means when a message is sent by participants of a conversation, the messages are intercepted and sent to an endpoint on the server side for translation, then the translated text is returned to the client and rendered to the screen. First, create an endpoint to listen for incoming translation requests. Add the code below to app.py: @app.route("/translate", methods=["POST"]) def translate_text(): """ Translate chat with DEEPL client library """ request_data = request.get_json() input_text = request_data["text"] target_lang = request_data["target_lang"] response = translator.translate_text(text=input_text, target_lang=target_lang) response_text = response.text return jsonify({"response_text": response_text}) The code above defines a route /translate that listens for POST requests. When a POST request is received, the translate_text() function is executed. It retrieves the request data using request.get_json(); the request data is expected to be in JSON format, containing two keys: text and target_lang. text represents the input text to be translated, and target_lang represents the language to which the text should be translated. The function uses the previously initialized DeepL client library to perform the translation. It calls a method named translate_text() from the translator object, passing the input text and target language as arguments. Then, the translated text is extracted from the response object using the text attribute and assigned to the response_text variable. Finally, the translated text is returned as a JSON response using the Flask jsonify() function, in the format {"response_text": response_text}, where response_text contains the translated text. You can exercise the endpoint on its own, as shown below.
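Assuming the Flask app is running locally with a valid DeepL key, a quick way to test /translate is a short Python script; the text and target language here are arbitrary examples, and the printed response is only indicative:

```python
# Standalone sketch: call the /translate endpoint directly.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/translate",
    json={"text": "Hello, how can I help you today?", "target_lang": "DE"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {'response_text': 'Hallo, wie kann ich Ihnen heute helfen?'}
```

The browser code you'll write next sends exactly this shape of request.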
Send translation requests As stated earlier, messages exchanged between participants of a conversation will be intercepted on the client side and translated. To translate texts, a POST request will be initiated from the browser. In your templates folder, open up chat.html and add the following lines within the <script> tag: // send input text for translation async function callTranslateAI(text, targetLang) { const response = await fetch("/translate", { method: "POST", headers: { "Content-type": "application/json", }, body: JSON.stringify({ text: text, target_lang: targetLang }), }); const json = await response.json(); return json; } The code implements an asynchronous JavaScript function named callTranslateAI() that sends a request to the /translate endpoint for text translation using DeepL. callTranslateAI takes two parameters: text and targetLang. text represents the input text that needs to be translated, and targetLang represents the language to which the text should be translated. Inside the function, a POST request is made using the fetch() function to the /translate endpoint of the running Flask app. The request's "Content-type" header is set to "application/json" to specify that the request body is in JSON format. The JSON.stringify() function is used to convert an object containing the text and target_lang properties into a JSON string, which is set as the request body. After sending the request, the function waits for the response using the await keyword before proceeding. Once the response is received, the function uses the json() method to parse the response body as JSON and stores the result in the json variable. The function then returns the parsed JSON, which is expected to contain the translated text in the property "response_text". Render messages to the screen The next piece to implement is the displaying of messages on the screen. To render messages to the screen, you need to create a new conversation object or fetch an existing conversation and access the messages associated with the conversation object. Since conversation objects are created on the server side, you will fetch a conversation using the Twilio Conversations client. Add the following code to the initialized event listener within the <script> tag: client.on("initialized", () => { console.log("Client initialized successfully"); // Use the client. client .getConversationBySid(chat_id) // fetch conversation using the conversation ID .then((conversation) => { if (conversation) { conv = conversation; // if conversation exists, fetch previous messages conversation.getMessages().then((msgs) => { // render messages to the screen renderMessages(msgs.items); }); // listen for incoming messages and render to screen conversation.on("messageAdded", messageAdded); } else { console.log("Conversation not found"); } }) .catch((error) => { console.error("Error fetching messages:", error); }); }); Once the client is successfully initialized, the getConversationBySid() method is invoked by passing a conversation ID (chat_id) as an argument. This method will fetch the conversation associated with the provided ID.
If the conversation exists, the conversation object is assigned to the conv variable and the messages associated with the conversation are fetched using the getMessages() method. The retrieved messages are then passed to a function called renderMessages() to render them on the screen. Additionally, the messageAdded event handler from the Conversations API listens for incoming messages. When a new message is received, the function messageAdded is invoked to handle the event and render the new message on the screen. If the conversation does not exist, the code logs a message to the console indicating that the conversation was not found. In case of any errors during the process, the code uses catch() to handle and log the error to the console. Create a function to send messages To render messages to the screen, you must first be able to send messages. In the HTML template, a JavaScript function is called to listen for key up events. This will be useful to send messages when a user hits "Enter". Add the following code to the <script> in chat.html: const onSubmit = (ev) => { if (ev.key !== "Enter") { return; } const input = document.getElementById("large-input"); if (conv) { conv.sendMessage(input.value); input.value = ""; } else { console.log("Conversation not found"); } }; The code above defines a function named onSubmit(). As mentioned, it is an event handler function triggered when a user presses a key. First, the function checks if the key pressed is not equal to "Enter" (ev.key !== "Enter"). If it's not the "Enter" key, the function returns early and does nothing. If the "Enter" key is pressed, the function looks up the input element with the ID "large-input" using document.getElementById("large-input") and stores the element in the input variable; the text the user typed is available as input.value. Then, the code checks if a conversation object (conv) exists. If it does, it means that a conversation has been previously fetched. In that case, the sendMessage() method of the conversation object is called and passed input.value as the message content. After sending the message, input.value is set to an empty string, clearing the input field for the next message. If a conversation object does not exist, the function logs a message to the console indicating that the conversation was not found. Display stored messages When a conversation is fetched using the client object, messages associated with a particular conversation are accessed and displayed on the screen. Add the following lines of code within the <script> tag: async function renderMessages(messages) { const messageLog = document.getElementById("message-log"); messageLog.innerHTML = ""; // Clear the message log for (const msg of messages) { let translatedMessage; if (role === "customer_rep") { // translate message for customer rep translatedMessage = await callTranslateAI(msg.body, language); } else if (role === "customer") { // translate message for customer translatedMessage = await callTranslateAI(msg.body, language); } const messageDiv = document.createElement("div"); messageDiv.innerHTML = `<b>${msg.author}</b>: ${translatedMessage["response_text"]}`; messageLog.appendChild(messageDiv); } } The code above implements an asynchronous function named renderMessages(), which takes an array of messages as input. Inside the function, it starts by selecting the HTML element with the ID "message-log" using document.getElementById("message-log") and assigns it to the messageLog variable.
This element is a container where the rendered messages will be displayed. The next line, messageLog.innerHTML = "";, clears the content of the messageLog element, ensuring a fresh start for rendering messages. Then, a for...of loop is used to iterate over each msg in the messages array. For each message, a translatedMessage variable is declared. Based on the value of the role variable, there are two possible scenarios: If role is equal to "customer_rep", it means the user is a customer representative. In this case, the message body (msg.body) is passed to the callTranslateAI() function along with the language variable to translate the message using the DeepL API. The translation is awaited and the result is stored in the translatedMessage variable. If role is equal to "customer", it means the user is a customer. The process is the same as above, and the message body is translated using the callTranslateAI() function. After the translation is obtained, a new <div> element is created using document.createElement("div") and assigned to the messageDiv variable. The inner HTML of the messageDiv is set to a formatted string that displays the message author (msg.author) in bold and the translated message text (translatedMessage["response_text"]). Then, the messageDiv is appended as a child to the messageLog element using messageLog.appendChild(messageDiv), effectively rendering the translated message on the screen within the message log container. Display new messages Displaying new messages on screen follows the same approach as the renderMessages function described above. Add the following lines of code to the script tag, below renderMessages: // Translate texts in real-time async function messageAdded(msg) { const messageLog = document.getElementById("message-log"); const messageDiv = document.createElement("div"); let translatedMessage; if (role === "customer_rep") { // translate message for customer rep translatedMessage = await callTranslateAI(msg.body, language); } else if (role === "customer") { // translate message for customer translatedMessage = await callTranslateAI(msg.body, language); } messageDiv.innerHTML = `<b>${msg.author}</b>: ${translatedMessage["response_text"]}`; messageLog.appendChild(messageDiv); } The code above defines an asynchronous function named messageAdded() that acts as an event handler for adding real-time messages to a conversation. It translates the message body based on the user's role (customer or customer representative) using DeepL. The translated message is displayed on the screen within the message log container. With all of that done, you can move on to testing your app. Test your multilingual web chat To test your app, you need to work with two browser windows. But first, you must ensure your Flask app and local Mongo server are up and running. Also, make sure your machine has an internet connection, since the app calls out to Twilio and DeepL. Open up a terminal window and run the Flask app with the command below: flask run --debug Open up another terminal and run Tailwind CSS. This will ensure CSS styles are generated for the templates: npm run build Also, you can check the status of your Mongo server with the command: sudo systemctl status mongod If there are no errors, open up your browser and visit the address: http://127.0.0.1:5000 You should see the index page. Clicking on the Start Conversation button will take you to the login page (because it's a protected route that's only accessible to authenticated users). Next, visit the sign-up endpoint to create a user account for testing the app.
Go to the address: http://127.0.0.1:5000/auth/customer/register The language field sets the default language for text translations for a user. After creating a user account, you'll be redirected to a login page to sign in. After signing in, you'll be redirected to the index page. On the index page, click on the Start Conversation button to open up a chat screen. Next, open up another browser window to sign in to your customer rep account. No need to create an account for this user (you created it from the Mongo shell): Follow the route: http://127.0.0.1:5000/auth/rep/login Use the credentials: Username = admin Password = password When successfully signed in, you'll find a list of available conversations. In my case, there are two available conversations because I created a customer profile while testing. On your screen, you should see the conversation you created. Click on the name of the user you created to open the conversation screen. Finally, start chatting. Send messages in the default language of the user; they will be translated into the default language of the receiver. If you encounter any issues or errors while testing the code, download the full project folder from this GitHub repository. When the profile for the customer representative was created, the default language was set to "EN-US". This implies that all messages on the screen of the customer representative will be translated into American English. What's next for multilingual chat applications? Congratulations! Your multilingual web application is up and running. Currently, the app always translates all messages stored in the Message object. This is an expensive operation because DeepL charges you for each character translated. A possible solution is to implement an in-memory caching system on the client side which stores translated messages and retrieves them when the page is loaded. This way, only new messages are sent for translation, and each result is cached. That said, I hope you enjoyed this tutorial. Twilio offers many other awesome services and we'll explore them in other tutorials. Till then! Nicholas is a versatile software engineer proficient in Flask/Python and NodeJS. With a passion for learning, he constantly seeks out new technologies and skills to expand his knowledge. Connect with him on LinkedIn.
Header photo credits to SantosSocialClub. What better way to unite attendees and engage with fans before a music festival than with a lively and engaging music trivia contest? Using Twilio's SMS API, you can quickly develop and host a music trivia game that is sure to keep your guests interested and delighted whether you're holding a small gathering or a big festival. This article will go over the procedures for building a Twilio-enabled music trivia game for your festival group, including how to set up the game, develop the questions, and oversee the game while it is taking place, using Twilio’s Programmable API Suite. So, get ready to make your festival even more memorable with a fun and exciting music trivia game powered by Twilio! In this music trivia game, participants respond to music-related questions. Questions on musicians, songs, albums, genres, and other relevant topics could be asked of the participants. The game is won by the person who correctly answers the most questions. Prerequisites IDE such as IntelliJ IDEA. A basic understanding of Java and Spring Boot or willingness to learn. An understanding of MySQL databases or willingness to learn. Postman to test the game. Set up the project You need to ensure that your project is set up properly before starting to create the app. Start by creating a new Spring Boot project using the Spring Initializr. In the Project selection menu, select "Java" and then "Maven." Select Spring Boot 3.0.5 as it is the most recent and stable version as of this article's authoring. The application Artifact should be renamed to something such as "musiktrivia" in the Project Metadata. This name will be used as the display title and entrance title for the project. Include a project description if you think it will help you plan more effectively. Please note that, if you want to have the same import statements as I do, you would have to use the same name in the Project Metadata. This would help you achieve import conventions such as: import com.twilio.trivia.x With Java 17 and the built-in Tomcat server that Spring Boot offers, choose "Jar" as the packaging type for the application. Click the Generate button at the bottom of the page and the browser will download a .zip file with the project's boilerplate code. Now you can launch your IDE and open the project. Make sure the pom.xml file for your project contains the following dependencies. To add them to the file, simply copy and paste. 
<dependencies> <dependency> <groupId>com.twilio.sdk</groupId> <artifactId>twilio</artifactId> <version>7.55.1</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>com.mysql</groupId> <artifactId>mysql-connector-j</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <version>1.18.20</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>jakarta.persistence</groupId> <artifactId>jakarta.persistence-api</artifactId> <version>3.1.0</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> </dependency> </dependencies> Save the modifications to the file. Click the small Maven refresh icon (the "M" shaped icon in the top-right corner of IntelliJ IDEA) to load the Maven changes. Set up the MySQL database A MySQL database will store the objects and events of the trivia game app. Please note that you can use any database provider of your choice. Open your favorite SQL client tool; in my case, it is MySQL Workbench. In the workbench, create a database (schema) with any name of your choice. In my case, it is named "twilio-db". Save and exit the tool. Navigate to the resources subfolder in your project directory and open the application.properties file. Paste the following information into it: spring.datasource.url=jdbc:mysql://localhost:3306/twilio-db?serverTimezone=UTC spring.datasource.username=<YOUR_DATABASE_USERNAME> spring.datasource.password=<YOUR_DATABASE_PASSWORD> spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQLDialect spring.jpa.hibernate.ddl-auto=update spring.datasource.url specifies the URL for connecting to the MySQL database. In this case, it is set to jdbc:mysql://localhost:3306/twilio-db?serverTimezone=UTC. Here, jdbc:mysql://localhost:3306 indicates that the application will connect to a MySQL database running on the local machine (localhost) on port 3306. /twilio-db is the name of the database being used, and ?serverTimezone=UTC tells the JDBC driver to treat the server's timezone as UTC. spring.datasource.username specifies the username used to authenticate with the MySQL database. spring.datasource.password specifies the password used to authenticate with the MySQL database. spring.datasource.driver-class-name specifies the JDBC driver class name for MySQL. In this case, it is set to com.mysql.cj.jdbc.Driver, which is the driver class for MySQL. spring.jpa.properties.hibernate.dialect specifies the Hibernate dialect to be used with MySQL. With the Hibernate 6 baseline that ships with Spring Boot 3, the generic org.hibernate.dialect.MySQLDialect is the right choice (the old version-specific MySQL5Dialect is deprecated), and Spring Boot can usually detect the dialect on its own, so this property can even be omitted. spring.jpa.hibernate.ddl-auto configures the behavior of Hibernate's schema generation tool. Setting it to "update" means that Hibernate will attempt to update the database schema automatically based on the entity mappings. If the schema doesn't exist, it will be created. If it exists, Hibernate will update it to match the entity mappings, but existing data will not be modified. Your database is now set up and ready for development.
Initialize the port By default, Spring Boot projects run on port 8080. However, you can change this port to a value of your choice. In the case of this tutorial, we set the port to 9090. Here's how to do it in the application.properties file. Paste the following text into the file: server.port=9090 Create project models To interact with the trivia game, a number of entities are required. Essentially, there have to be people playing the game. There are also questions to be answered and a way to keep track of the score so that an eventual winner can be determined. Create the User Model Under the model subfolder, create a file called User.java. The code below should be copied and pasted into the newly generated file. This model stores information about each user playing the game, namely their name and phone number (per-game scores are tracked separately in the RealTimeData model you'll define shortly). import jakarta.persistence.*; import lombok.AllArgsConstructor; import lombok.Getter; import lombok.NoArgsConstructor; import lombok.Setter; @Entity @Getter @Setter @AllArgsConstructor @NoArgsConstructor public class User { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @Column(nullable = false) private String name; @Column(nullable = false) private String phoneNumber; } @Getter and @Setter: These annotations are part of the Lombok library and automatically generate getter and setter methods for the private fields in the class. In this case, it generates getters and setters for the fields id, name, and phoneNumber. These methods allow accessing and modifying the values of the class attributes. @AllArgsConstructor: This annotation is also from the Lombok library and generates a constructor that accepts arguments for all the fields in the class. In this case, it generates a constructor with parameters for id, name, and phoneNumber. This constructor allows creating an instance of the class with all the necessary data. @NoArgsConstructor: This annotation, also from Lombok, generates a constructor with no arguments. In this case, it creates a constructor without any parameters. This constructor allows creating an instance of the class without providing initial values for the fields. It can be useful in certain scenarios when you want to create an empty instance and later set the values using the generated setters. The class itself has the following fields: name, phoneNumber: These fields represent the attributes of the User entity. They are annotated with @Column(nullable = false), which means that these fields cannot be null when persisting the User object in the database. Both name and phoneNumber are of type String. Create the Question model Create a file called Question.java in the model subfolder, just like you did with the user model. Copy and paste the following code snippet into the newly generated file. This model stores information about each question in the trivia game: the question text, the index of the correct answer, and the game the question belongs to. import jakarta.persistence.*; import lombok.AllArgsConstructor; import lombok.Getter; import lombok.NoArgsConstructor; import lombok.Setter; @Entity @Getter @Setter @AllArgsConstructor @NoArgsConstructor public class Question { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @Column(nullable = false) private String questionText; private int correctAnswer; private String gameId; } questionText: This String field represents the text of a question.
It is annotated with @Column(nullable = false), indicating that it cannot be null when persisting the object in the database. correctAnswer: This field represents the index of the correct answer among the four options included in the question text. It is an integer and does not have any annotations associated with it. The value of this field should be between 1 and 4, corresponding to those options. gameId: This field represents the identifier of the game to which the question belongs. It is a String and does not have any annotations associated with it. This field is used to associate the question with a specific game in the system. Create the Game model This model stores information about each game session, including the start time, end time, and whether the game is ongoing. Create a file called Game.java in the model subfolder and paste in the following code snippet. import jakarta.persistence.*; import jakarta.persistence.Id; import lombok.AllArgsConstructor; import lombok.Getter; import lombok.NoArgsConstructor; import lombok.Setter; import java.time.LocalDateTime; @Getter @Setter @AllArgsConstructor @NoArgsConstructor @Entity public class Game { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; private boolean isOngoing = true; @Column(nullable = false) private LocalDateTime startTime; @Column(nullable = false) private LocalDateTime endTime; } isOngoing: This field indicates whether the game is still being played. It defaults to true when a game is created; when the game ends, it is set to false, specifying that the game is no longer being played. startTime: This field represents the starting time of an event or activity. It is of type LocalDateTime, which is a class in Java that represents a date and time without any time zone information. The @Column(nullable = false) annotation indicates that this field cannot be null when persisting the object in the database. Therefore, a valid value for startTime must be provided. endTime: This field represents the ending time of an event or activity. It is also of type LocalDateTime and is annotated with @Column(nullable = false). Similar to startTime, this annotation indicates that endTime cannot be null when persisting the object in the database. Note, however, that the startGame() service method implemented later saves a new Game before an end time exists, so with a strict MySQL configuration you may need to relax nullable = false on endTime or assign a provisional value when the game starts. Create the real-time data sync model Finally, in the same subdirectory where the other models have been created, create a file named RealTimeData.java and paste in the code below. This model is responsible for managing the real-time data synchronization between different players in the game, including updating the scores and game progress in real time as players answer questions. import jakarta.persistence.*; import lombok.AllArgsConstructor; import lombok.Getter; import lombok.NoArgsConstructor; import lombok.Setter; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; @Getter @Setter @AllArgsConstructor @NoArgsConstructor @Entity public class RealTimeData { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @ElementCollection private List<Long> playerIds = new ArrayList<>(); @ElementCollection private Map<Long, Integer> scores = new HashMap<>(); private String gameId; private int winnerId = 0; } playerIds: This field represents a collection of player IDs. It is annotated with @ElementCollection. This annotation indicates that the field is a collection of simple values (in this case, Long values) that will be stored separately from the main entity.
It allows for mapping a collection of values without the need for a separate entity. In this case, it represents the list of player IDs stored in the playerIds field. The initial value of the field is an empty ArrayList. scores: This field represents a mapping between player IDs (Long) and their respective scores (Integer). It is annotated with @ElementCollection. This annotation signifies that the field is a collection of key-value pairs that will be stored separately from the main entity. In this case, it represents a mapping between player IDs and their scores stored in the scores field. The initial value of the field is an empty HashMap. gameId: This field represents the identifier of the game. It is of type String and does not have any annotations associated with it. This field uniquely identifies the game associated with the entity. winnerId: This field represents the identifier of the winner of the game. Create the CreateQuestionRequest model In the model directory, create a file named CreateQuestionRequest.java and paste the code snippet below in this file. The purpose of this class is to receive requests for creating a question object. import lombok.Getter; import lombok.Setter; @Getter @Setter public class CreateQuestionRequest { private String questionText; private String gameId; private int correctAnswer; } Build the repositories Create four new files under the repository subdirectory: UserRepository.java, GameRepository.java, RealTimeDataRepository.java and QuestionRepository.java. Paste the following code snippets in each of their respective files: Implement UserRepository package com.twilio.trivia.repository; import com.twilio.trivia.model.User; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.stereotype.Repository; @Repository public interface UserRepository extends JpaRepository<User, Long> { } Implement GameRepository package com.twilio.trivia.repository; import com.twilio.trivia.model.Game; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.stereotype.Repository; @Repository public interface GameRepository extends JpaRepository<Game, Long> { } Implement RealTimeDataRepository package com.twilio.trivia.repository; import com.twilio.trivia.model.RealTimeData; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.stereotype.Repository; @Repository public interface RealTimeDataRepository extends JpaRepository<RealTimeData, Long> { RealTimeData findByGameId(Long gameId); } Implement QuestionRepository package com.twilio.trivia.repository; import com.twilio.trivia.model.Question; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.stereotype.Repository; @Repository public interface QuestionRepository extends JpaRepository<Question, Long> { Question findByGameId(String gameId); } One thing to watch: gameId is declared as a String on the Question and RealTimeData models, while RealTimeDataRepository.findByGameId() takes a Long, and the service layer shown next mixes the two (calling setGameId() with a game's Long id and wrapping strings with Long.valueOf()). If the compiler or Spring Data complains about these conversions, pick a single type for gameId and use it consistently across the models, repositories, and service. Build the services The business logic for how the user will interact with the app is implemented in this section. Integrate the Twilio API The logic for communicating with the Twilio Programmable SMS API is implemented in the TwilioConfigService.java class within the service package. Avoid hard-coding your Twilio credentials or any other sensitive information in this or any other source file. There are several ways to manage them; a common approach, used here, is to declare them in the application.properties (or application.yml) file and keep that file out of version control.
Go to the application.properties file in the resources directory and paste the configuration below into it: #... Other environmental variables #Twilio Config ACCOUNTSID=<ACCOUNT_SID> AUTHTOKEN=<ACCOUNT_TOKEN> FROMNUMBER=<YOUR_TWILIO_NUMBER> Paste the following code snippet into the TwilioConfigService.java file. import com.twilio.Twilio; import com.twilio.rest.api.v2010.account.Message; import com.twilio.type.PhoneNumber; import lombok.extern.slf4j.Slf4j; import org.springframework.beans.factory.annotation.Value; import org.springframework.stereotype.Service; @Service @Slf4j public class TwilioConfigService { @Value("${ACCOUNTSID}") private String accountSID; @Value("${AUTHTOKEN}") private String authToken; @Value("${FROMNUMBER}") private String fromNumber; public void sendMessage(String toNumber, String messageBody) { Twilio.init(accountSID, authToken); Message message = Message.creator(new PhoneNumber(toNumber), new PhoneNumber(fromNumber), messageBody).create(); } } The sendMessage() function requires two parameters: toNumber, which is the receiver's phone number, and messageBody, which is the text message that will be delivered to the recipient. The accountSID and authToken are used to initialize the Twilio API client using the Twilio.init() function. The Message.creator() function builds a new SMS message with the message body, sender phone number, and receiver phone number specified, and the chained create() call sends the message and returns a Message object with the message's specifics, such as its SID. To use this TwilioConfigService class, replace the <ACCOUNT_SID>, <ACCOUNT_TOKEN>, and <YOUR_TWILIO_NUMBER> placeholders with your Twilio account SID, authentication token, and phone number from the Twilio Console. You can then send SMS messages by injecting this service into your application components and calling the sendMessage() function. Implement the Trivia Service This class is responsible for starting, playing, and ending a game. In the service package, create a file named TriviaGameService.java and paste the code below in it.
The full class and its explanation are given below: import com.twilio.trivia.model.*; import com.twilio.trivia.repository.GameRepository; import com.twilio.trivia.repository.QuestionRepository; import com.twilio.trivia.repository.RealTimeDataRepository; import com.twilio.trivia.repository.UserRepository; import lombok.extern.slf4j.Slf4j; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import java.time.LocalDateTime; import java.util.Map; @Service @Slf4j public class TriviaGameService { @Autowired private UserRepository userRepository; @Autowired private QuestionRepository questionRepository; @Autowired private GameRepository gameRepository; @Autowired private RealTimeDataRepository realTimeDataRepository; @Autowired private TwilioConfigService twilioConfigService; public User createUser(User user) { return userRepository.save(user); } public Game startGame() { Game game = new Game(); game.setStartTime(LocalDateTime.now()); game = gameRepository.save(game); RealTimeData realTimeData = new RealTimeData(); realTimeData.setGameId(game.getId()); realTimeDataRepository.save(realTimeData); return game; } public RealTimeData checkGameData(Long gameId) { return realTimeDataRepository.findByGameId(gameId); } public String addUserToGame(Long userId, Long gameId) { RealTimeData realTimeData = realTimeDataRepository.findByGameId(gameId); realTimeData.getPlayerIds().add(userId); realTimeData.getScores().put(userId, 0); realTimeDataRepository.save(realTimeData); return "User has been successfully added"; } public Question createQuestion(CreateQuestionRequest createQuestionRequest) { Question question = new Question(); question.setQuestionText(createQuestionRequest.getQuestionText()); question.setCorrectAnswer(createQuestionRequest.getCorrectAnswer()); question.setGameId(createQuestionRequest.getGameId()); // send question to all game players RealTimeData realTimeData = realTimeDataRepository .findByGameId(Long.valueOf(createQuestionRequest.getGameId())); for (Long userId : realTimeData.getPlayerIds()) { String userPhoneNumber = userRepository.findById(userId).get().getPhoneNumber(); twilioConfigService.sendMessage("+234" + userPhoneNumber.substring(1), createQuestionRequest.getQuestionText()); } log.info(String.format("message sent: %s", createQuestionRequest.getQuestionText())); return questionRepository.save(question); } public String sendAnswer(Long userId, String gameId, int correctAnswer) { Question question = questionRepository.findByGameId(gameId); if (question.getCorrectAnswer() != correctAnswer) { return "Wrong answer. Thank you for trying"; } RealTimeData realTimeData = realTimeDataRepository.findByGameId(Long.valueOf(gameId)); int currentScore = realTimeData.getScores().get(userId); int newScore = currentScore + 1; realTimeData.getScores().put(userId, newScore); realTimeDataRepository.save(realTimeData); return "You are correct. 
Well done!"; } public void endGame(Long gameId) { Game game = gameRepository.findById(gameId).orElse(null); assert game != null; game.setEndTime(LocalDateTime.now()); game.setOngoing(false); gameRepository.save(game); } String userWithHighestScore = null; public String sendWinnerNotification(String gameId) { Long winnerId = null; RealTimeData realTimeData = realTimeDataRepository.findByGameId(Long.valueOf(gameId)); Map<Long, Integer> realTimeDataScores = realTimeData.getScores(); int highestScore = Integer.MIN_VALUE; for (Map.Entry<Long, Integer> entry : realTimeDataScores.entrySet()) { Long userId = entry.getKey(); int score = entry.getValue(); if (score > highestScore) { highestScore = score; userWithHighestScore = userRepository.findById(userId).get().getName(); winnerId = userId; } } String message = String.format("Hello there. Player with ID %s has won the trivia. Congratulations to " + "them. Thanks for playing", winnerId); for (Long userId : realTimeData.getPlayerIds()) { String userPhoneNumber = userRepository.findById(userId).get().getPhoneNumber(); twilioConfigService.sendMessage("+234" + userPhoneNumber.substring(1), message); } log.info(message); return "Notification sent"; } } The class is annotated with @Service, indicating that it is a Spring service component. The service has several dependencies autowired using the @Autowired annotation, including repositories such as UserRepository, QuestionRepository, GameRepository, and RealTimeDataRepository. Additionally, there is a dependency on TwilioConfigService. The createUser() method creates a new user by saving the provided user object using the UserRepository. The startGame() method initializes a new game by creating a new Game object, setting the start time using LocalDateTime.now(), and saving it to the GameRepository. It also creates a new RealTimeData object associated with the game and saves it to the RealTimeDataRepository. The addUserToGame() method adds a user to a specific game by updating the RealTimeData object associated with the game in the RealTimeDataRepository. It adds the user ID to the list of player IDs and initializes their score to 0. The createQuestion() method creates a new question by mapping the request data to a Question object and saving it to the QuestionRepository. It also sends a message to all players in the game using the twilioConfigService.sendMessage() method. The sendAnswer() method handles a player's answer to a question. It retrieves the corresponding question from the QuestionRepository, checks if the answer is correct, updates the player's score in the RealTimeData object, and marks the question as answered. The endGame() method marks the game as ended by retrieving the game object from the GameRepository and updating its end time and ongoing status. The sendWinnerNotification() method sends a notification to all players in the game, notifying them about the winner. It retrieves the scores from the RealTimeData object, determines the player with the highest score, sends a message to all players using the twilioConfigService.sendMessage() method, and returns a notification message. Build the controller In the root package of your project, create a project named controller. Inside it, create a file with the name TriviaGameController.java. 
Paste the code below into it:

import com.twilio.trivia.model.CreateQuestionRequest;
import com.twilio.trivia.model.Game;
import com.twilio.trivia.model.RealTimeData;
import com.twilio.trivia.model.User;
import com.twilio.trivia.service.TriviaGameService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api")
public class TriviaGameController {

    @Autowired
    private TriviaGameService triviaGameService;

    @PostMapping("/create-user")
    public ResponseEntity<?> createUser(@RequestBody User user) {
        return new ResponseEntity<>(triviaGameService.createUser(user), HttpStatus.OK);
    }

    @PostMapping("/game/add-user/{userId}/{gameId}")
    public ResponseEntity<?> addUserToGame(@PathVariable Long userId, @PathVariable Long gameId) {
        return new ResponseEntity<>(triviaGameService.addUserToGame(userId, gameId), HttpStatus.OK);
    }

    @PostMapping("/start-game")
    public ResponseEntity<Game> startGame() {
        Game game = triviaGameService.startGame();
        return new ResponseEntity<>(game, HttpStatus.OK);
    }

    @GetMapping("/see-game/{gameId}")
    public ResponseEntity<?> seeGameLiveData(@PathVariable Long gameId) {
        RealTimeData game = triviaGameService.checkGameData(gameId);
        return new ResponseEntity<>(game, HttpStatus.OK);
    }

    @PostMapping("/send-question/")
    public ResponseEntity<?> sendQuestion(@RequestBody CreateQuestionRequest createQuestionRequest) {
        return new ResponseEntity<>(triviaGameService.createQuestion(createQuestionRequest), HttpStatus.OK);
    }

    @PostMapping("/submit-answer/{userId}/{gameId}/{correctAnswer}")
    public ResponseEntity<?> submitAnswer(@PathVariable Long userId, @PathVariable String gameId,
                                          @PathVariable int correctAnswer) {
        return new ResponseEntity<>(triviaGameService.sendAnswer(userId, gameId, correctAnswer), HttpStatus.OK);
    }

    @PostMapping("/end-game/{gameId}")
    public ResponseEntity<?> endGame(@PathVariable Long gameId) {
        triviaGameService.endGame(gameId);
        return new ResponseEntity<>(HttpStatus.OK);
    }

    @PostMapping("/send-notification/{gameId}")
    public ResponseEntity<?> concludeGame(@PathVariable String gameId) {
        return new ResponseEntity<>(triviaGameService.sendWinnerNotification(gameId), HttpStatus.OK);
    }
}

The class is annotated with @RestController, indicating that it is responsible for handling incoming requests and returning responses in a RESTful manner. The @RequestMapping("/api") annotation defines the base URL path for all the endpoints defined in this controller.

The controller has a dependency on the TriviaGameService class, which is autowired using the @Autowired annotation. This allows the controller to use the services provided by the TriviaGameService when handling requests.

The controller defines several request mapping methods to handle different types of requests:

The createUser() method is mapped to the HTTP POST request with the path "/create-user". It expects a User object in the request body and returns a ResponseEntity containing the response body and HTTP status.

The addUserToGame() method is mapped to the HTTP POST request with the path "/game/add-user/{userId}/{gameId}". It extracts the user ID and game ID from the path variables and returns a ResponseEntity containing the response body and HTTP status.

The startGame() method is mapped to the HTTP POST request with the path "/start-game".
It starts a new game by invoking the startGame() method from the TriviaGameService and returns a ResponseEntity containing the response body (a Game object) and HTTP status.

The sendQuestion() method is mapped to the HTTP POST request with the path "/send-question/". It expects a CreateQuestionRequest object in the request body and returns a ResponseEntity containing the response body and HTTP status.

The submitAnswer() method is mapped to the HTTP POST request with the path "/submit-answer/{userId}/{gameId}/{correctAnswer}". It extracts the user ID, game ID, and correct answer from the path variables and invokes the sendAnswer() method from the TriviaGameService.

The endGame() method is mapped to the HTTP POST request with the path "/end-game/{gameId}". It extracts the game ID from the path variable and invokes the endGame() method from the TriviaGameService.

The concludeGame() method is mapped to the HTTP POST request with the path "/send-notification/{gameId}". It extracts the game ID from the path variable and returns a ResponseEntity containing the response body and HTTP status.

The controller methods delegate the actual processing of requests to the corresponding methods in the TriviaGameService, passing along the necessary parameters. The responses are wrapped in ResponseEntity objects, allowing customization of the response body and HTTP status codes.

Test your SMS trivia game

The code for this game can be found in this GitHub repository. Now test the app to ensure that it performs as intended. Click the green start button in the top-right corner of your IDE to launch the project. In your IDE's terminal, you should see a message that says "Started application...".

Open Postman to try out the endpoints established in the controller class; Postman is a tool for API testing. You will create a fictitious user, begin a game, add users to it, set the questions, receive answers as entries, and determine and announce a winner.

Create a user

Start Postman and paste the URL "http://localhost:9009/api/create-user" into the URL field. Select the Body tab, pick raw from the options listed beneath it, and then use the dropdown at the far right of the same row to pick JSON. Paste the following into the available space and make a POST request to the URL:

{
    "name": "Test User",
    "phoneNumber": "08123456789"
}

Then press Send. The user will be created successfully.

Start a game

To start a game, paste the URL "http://localhost:9009/api/start-game" into the URL field and make a POST request by clicking Send or pressing Enter. A Game object should now be created.

Add users to the game

Follow the steps above to create a number of users. For the sake of brevity, I will use two pre-added users, with IDs 2 and 3 respectively. Now, add the users to the game. Paste this URL into the URL field: "http://localhost:9009/api/game/add-user/{userId}/{gameId}". Replace the placeholders (userId and gameId) with their actual values.

In the screenshot above, we added a user with the ID 4 to the game with the ID 2. You can add more users with the same approach; I have gone ahead and added two more users to the game, with IDs 2 and 3.

In TriviaGameService.java, we wrote that when a game is started, a RealTimeData object is also created at that time.
To confirm that this happened, paste the following URL into the Postman URL field ("http://localhost:9009/api/see-game/<gameId>") and make a GET request. Remember to replace <gameId> with the actual value. You will see the players we added to the game and their score counts.

Send questions to players

Now it's time to put out the question. Paste "http://localhost:9009/api/send-question/" into the URL field, then paste the following request body into the Body section:

{
    "questionText": "Q. Burna Boy is \n 1. Ghanaian \n 2. Nigerian \n 3. French \n 4. British",
    "gameId": 1,
    "correctAnswer": 2
}

Make a POST request.

Send an answer to the question

This time, the players get to respond to the question by sending in their preferred answers. Recall that our game has three players. We will assume that one player got the answer correct and the others were wrong, then check the game's real-time data for updates.

First, paste the URL into the URL field: "http://localhost:9009/api/submit-answer/{userId}/{gameId}/{correctAnswer}". Replace the placeholders in the URL with their actual values. First we will test the incorrect answers from the players with user IDs 2 and 3, then a correct answer from the player with user ID 4.

Now, check the real-time data to see that there is progress in the game. The player with ID 4 should have a score of 1, while the players with IDs 2 and 3 should still remain at 0. Go to "http://localhost:9009/api/see-game/<gameId>" and make a GET request.

Finish the game and send the winner update

The game can continue for as long as you want. At the end, a winner should emerge and a notification should be sent to all players. Paste the URL "http://localhost:9009/api/send-notification/{gameId}" into the URL field, replace the placeholder with the actual value for the game ID, and make a POST request.

What's next for SMS trivia games in Java?

To turn this into a music trivia game, you will need a model to store information about the music tracks or artists featured in the game, or a model to store information about the festival group hosting the game. Creating a successful music trivia game requires careful consideration of various factors, such as the difficulty level of the questions, the format of the game, and the prizes or rewards for winners. It's important to strike a balance between challenging questions and questions that are accessible to everyone. The format of the game should also be deliberate, whether it's a traditional quiz or a more interactive game that involves physical challenges or team collaboration. Building a music trivia game for your festival group can be an excellent way to enhance the overall festival experience.

Happy coding.

Tolulope is a growing software engineer who loves coding and writing about what he does. When he's not coding, he enjoys his time as a disc jockey.
Traditional Automated Speech Recognition (ASR) tools, also called Speech-to-Text (STT) tools, are powerful, especially if you constrain the range of possible answers or caller utterances to a manageable set based on well-established or longstanding caller behavior. But if you'd like to adopt a more conversational tone with your callers and give callers a wider array of choices, then connecting from Twilio (using our <VirtualAgent> noun and one-click Studio Connector) to a predictive AI tool like Google's Dialogflow CX may be just the solution you need. It can help you with structuring your set of to-be-detected Intents, creating a set of Training Phrases (training data), and creating Action/Responses for your <VirtualAgent> "bot". Dialogflow bots are especially useful when one of Google's Prebuilt Agent templates suits your use case. Twilio's integration and one-click connection between Dialogflow and the Studio Connect Virtual Agent widget makes it straightforward to get started.

To learn how to build a Dialogflow CX <VirtualAgent> bot or conversational IVR, or to see it in action, see Twilio Developer Education's awesome "Level up" session covering Dialogflow basics, led by Sarah Deaton, here.

Hints for using Google's Dialogflow CX as the Twilio <VirtualAgent> bot

1. For Dynamic and Advanced Settings, such as Multi-lingual Bots, use <Config>

A bot designer or developer working with Dialogflow on Twilio can specify the language for speech recognition in the Connector, or dynamically using the <Config> TwiML noun nested inside the <Connect> verb and <VirtualAgent> noun pair. (Google has written about how they train and develop the ASR underlying their bots for use with multiple languages.) Using <Config> will override your underlying Dialogflow CX Connector's configuration and pass additional parameters that can change other behaviors of the virtual agent.

<Config> has two attributes, name and value, both of which must be set every time you use <Config>, and you must include a new <Config> noun for each configuration option you want to override. The name attribute corresponds to one of your Dialogflow CX Connector's configuration options, such as language, sentimentAnalysis, voiceName, and welcomeIntent. Additionally, some attributes are not present in your Dialogflow CX Connector configuration, such as voiceModel, speechModel, and speechModelVariant; you can still set these using the <Config> noun nested inside <VirtualAgent>.

For example, if you want to customize the TTS voice and language for the virtual agent interaction, supply the respective configuration settings inside <Config> nouns as follows:

<Response>
  <Connect>
    <VirtualAgent connectorName="uniqueName">
      <Config name="language" value="en-us"/>
      <Config name="voiceName" value="en-US-Wavenet-C"/>
    </VirtualAgent>
  </Connect>
</Response>

2. Better, Consistent Voice Prompting

Twilio's Text-to-Speech now supports Google voices in Public Beta, and many new voices for different languages and genders have been added to the catalog. This enables you to deliver a consistent experience by using the same prompting voice across all parts of your Twilio application. For example, you can use the same voice when combining conversational AI from Google Dialogflow with other Twilio-prompted <Say> interactions, such as Twilio Verify for 2FA (two-factor authentication), or shortly, Twilio-wide with tools like Twilio <Pay> to capture payments in a PCI-compliant manner.
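As a minimal, hypothetical sketch (not an official example), you could keep an ordinary <Say> prompt elsewhere in your call flow on the same Google voice the Connector uses by building the TwiML by hand. The "Google." prefix reflects Twilio's documented <Say> naming for Google voices, but confirm the exact identifier against the current voice catalog; the greeting text is made up:

public class ConsistentVoicePrompt {

    public static void main(String[] args) {
        // Mirror the voiceName set on the Dialogflow CX Connector (or via <Config>) above.
        // Assumption: Twilio <Say> addresses Google TTS voices with a "Google." prefix.
        String voice = "Google.en-US-Wavenet-C";
        String twiml = "<Response><Say voice=\"" + voice + "\">"
                + "Thanks for calling. One moment, please.</Say></Response>";
        System.out.println(twiml);
    }
}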
Any supported Google TTS voice not currently listed in the Dialogflow CX Connector dropdown can be selected via the voiceName configuration in TwiML. In Studio, you can set an unlisted voice through the same key-value pair in the optional Configurations section of the Studio widget.

3. Pull helpful, call-customizing information into Dialogflow from Twilio and Studio

Input parameter variables are a beautiful thing. Suppose you want to personalize a caller's experience of your bot by greeting them by name. In that case, you don't have to look up your customer's name in a separate CRM database step if it is already in the CNAM variable that Twilio or the connect_agent Studio widget sees. Here's a list of the input variables Studio has available to send to Dialogflow.

4. Align subaccounts and bots

For subaccounts (especially as used by ISVs – Independent Software Vendors – with their end customers), we have another list of best practices. We suggest ISVs use a separate bot for each subaccount, so that each bot references the correct end customer's input data. Even though billing (e.g., for charge-backs) is already reported on a per-subaccount basis, we still recommend using parameters to feed unique data or prompts to a bot sitting at the Twilio subaccount level, rather than wrestling with the complexity of serving multiple customers from a bot sitting at the parent-account level. In the former (recommended) approach, each subaccount would configure its own Connector between its Dialogflow GCP project and the Twilio subaccount and Studio Flow (if the ISV uses Studio instead of TwiML). In either case, billing and usage would be visible at the subaccount level.

5. Build smarter bots – train and optimize your bot using relevant data

We've tried to make creating the first version of a bot as easy as possible (see our best practices) – but you should also reserve the right for your bot to get smarter in subsequent iterations of your voice workflows. You should always be trying to optimize your customers' experience when they call your business's voice front door. For instance: what if your bot were smart enough to recognize that many customers were asking for the same seasonal special? What would be the business outcome of capturing that demand? With Twilio and Google's tools combined, agility like this is possible.

Google Dialogflow CX has sophisticated ways of generating a good combinatorial matrix of training phrases to identify caller Intents. The builder or designer need only enter a few prompts – Google recommends around ten or more. However, the best training data for improving and optimizing a bot is what customer callers actually say to the bot itself. Hooking calls up to Twilio Call Recording and Transcriptions and Twilio Voice Intelligence, and using Language Operators to pull insights out of structured voice conversation data, can help you discover new Intents not already captured in the training data you entered into the Google Dialogflow CX bot. Consider what you might find: competitor mentions, churn intent (and reasons), and more.

With improving technology, the proper settings, and these tips and best practices, today's predictive AI "Virtual Agent" bots using Automated Speech Recognition (ASR) from Twilio and Google can deliver incredible recognition accuracy. That holds true even in challenging, noisy environments.
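To tie tip 1 back into a working webhook before wrapping up, here is a minimal sketch that serves the earlier <Connect>/<VirtualAgent>/<Config> TwiML from a plain Java endpoint using the JDK's built-in HTTP server (Java 15+ for the text block; the port, path, and connectorName are placeholders to replace with your own):

import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class VirtualAgentTwimlServer {

    public static void main(String[] args) throws Exception {
        // The same TwiML shown in tip 1, served from a voice webhook endpoint.
        String twiml = """
                <Response>
                  <Connect>
                    <VirtualAgent connectorName="uniqueName">
                      <Config name="language" value="en-us"/>
                      <Config name="voiceName" value="en-US-Wavenet-C"/>
                    </VirtualAgent>
                  </Connect>
                </Response>""";

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/voice", exchange -> {
            byte[] body = twiml.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}

Pointing a Twilio phone number's voice webhook at this endpoint hands each inbound call to the Dialogflow CX bot with the overridden language and voice.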
Twilio is the right "tool" for connecting these newly automated self-service applications to mobile and PSTN callers, and we can't wait to see the conversational experiences you build! If you're exploring a Dialogflow bot and still need to read through our best practices for automatic speech recognition, find them here. And whether you use a Dialogflow bot or not, read how Voice Insights can help you improve the performance of your speech recognition solution. For more best practices, see Twilio's Interactive IVR tour, or see our 11 best practices for Speech Recognition and Virtual Agent Bots.

Russ Kahan is the Principal Product Manager for <Gather> Speech Recognition, Dialogflow Virtual Agents, Media Streams, and SIPREC at Twilio. He's enjoyed programming voice apps and conversing with robots since sometime back in the late nineties – when this stuff was still called "CTI," for "Computer Telephony Integration" – but he also enjoys real-world pursuits like Scouting, skiing, swimming, and mountain biking with his kids. Reach him at rkahan [at] twilio.com.

Jeff Foster is a Software Engineer on Twilio's Programmable Voice team, and he's been working on speech recognition at Twilio for the last six years – including the original Dialogflow prototype implementations more than two years ago. He can be reached at jfoster [at] twilio.com.

Ramón Ulldemolins Andreu is a Product Manager for Twilio Voice. He helps companies transform their businesses by embracing technology to build digital, data-driven engagement at scale. He also loves traveling, experiencing local culture and food, and live music. He can be reached at rulldemolinsandreu [at] twilio.com.
Conversational bot designers and developers – as well as callers into speech-enabled Interactive Voice Response (IVR) systems and Virtual Agents alike – continually ask themselves the same questions: "Why doesn't this bot understand me? What more does it need to be able to understand what I just said to it?" While AI-based Automated Speech Recognition (ASR) can be inherently challenging (especially in noisy environments) and there are inherent accuracy and latency trade-offs to navigate, there are ways to improve speech recognition performance. This post will give you the best practices to maximize the odds of a superior automated self-service experience with Twilio. Although the tools provided in the Twilio CPaaS tool bench are powerful, some of the coolest features in our recently GA'd <VirtualAgent> bot from Google remain somewhat hidden. Read on below (or afterwards, in our post on Dialogflow CX tips) for how to get at the best parts of them.

Twilio's recommendations for improving <Gather> <Speech> recognition in an IVR

By implementing the following tips and recommendations, you can increase the likelihood that Google's ASR, as used by Twilio, will recognize spoken text correctly and that your application (or Twilio Studio Flow) can take the appropriate next action to confirm or validate the caller's input. These best practices will minimize disturbance to the caller, delivering a more conversational IVR, customer engagement, or automated self-help experience. All of this will reduce caller frustration and ensure better overall efficiency and cost performance of your IVR system.

1. Rely on the (ever-improving) powers of mobile devices

Twilio recommends that customers utilize the mobile phone's microphone for improved audio quality, along with the noise-canceling features already available on devices themselves. To minimize outside noise interference, we recommend using the phone's handset mode rather than speakerphone mode to capture user input. By reducing the impact of background noise on speech recognition, noise-canceling microphones and acoustic echo cancellation can significantly enhance recognition accuracy.

2. Choose a high-quality PSTN connection provider

You cannot recognize speech on calls that do not get successfully connected to your app. Twilio has high-quality, reliable interconnects with multiple providers, serving both inbound and outbound calling use cases (including Number Porting) – at cloud/elastic scale – the world over. Don't let poor connectivity foil your attempts to engage and serve your customers!

3. Leverage "Hints" in the <Gather> verb to the max

Include all the possible inputs that a user may speak as part of the hints in the <Gather> verb. Add as many as you like into your code; there is no scaling penalty for running an app with 1 or 10 hints versus 99 (we allow hundreds – here are the current limits). Adding these will guide users' input and increase the likelihood of accurate recognition. If you're expecting an address or a dollar/currency amount, these are particularly relevant. Here are some examples of supported class tokens by language in Twilio's Docs and Google's Docs:

$ADDRESSNUM (street number), $STREET (street name), and $POSTALCODE
$MONEY (amount with currency unit)
$OPERAND (numeric)
DTMF, etc.

In the first example below, we use the class token $OOV_CLASS_DIGIT_SEQUENCE because the requested account number is numeric. When the <Gather> completes, Twilio sends the result to the URL given in the action attribute.
Digits

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Gather action="https://actionurl.html" input="speech" timeout="3" hints="$OOV_CLASS_DIGIT_SEQUENCE">
    <Say>Please speak your account number</Say>
  </Gather>
</Response>

Temperature

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Gather action="https://actionurl.html" input="speech" timeout="3" hints="$OOV_CLASS_TEMPERATURE">
    <Say>Please speak your local temperature.</Say>
  </Gather>
</Response>

Phone Number

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Gather action="https://actionurl.html" input="speech" timeout="3" hints="$OOV_CLASS_FULLPHONENUM">
    <Say>Please speak your Phone Number</Say>
  </Gather>
</Response>

Street Address

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Gather action="https://actionurl.html" input="speech" timeout="3" hints="$ADDRESSNUM,$STREET">
    <Say>Please speak your account address, followed by the pound sign</Say>
  </Gather>
</Response>

Something you define

<Response>
  <Gather input="speech" hints="this is a phrase I expect to hear, keyword, product name, name">
    <Say>Please say something</Say>
  </Gather>
</Response>

<Gather> speech recognition is not yet optimized for alphanumeric inputs (e.g., ABC123). There are many homonyms among alphanumerics, which makes them harder to recognize. Please see tip #11 for more information on dealing with mixed alphanumerics. Using hints to discern between relevant and irrelevant homonyms depending on the use case (e.g., between "chicken" and "checking") is one strategy; re-prompting based on the available relevant choices (e.g., "I think you said 'checking', not 'savings' – is that correct?") is another. A third strategy is to use a virtual agent "bot" that can make probabilistic, statistical "informed guesses" (i.e., use predictive AI) to pick the best choice from among relevant alternatives, such as our integration with Dialogflow CX.

4. Use Enhanced Speech Recognition, and pick the Twilio <Gather> <Speech> Google ASR speech model best suited for your use case

The enhanced attribute instructs <Gather> to use a premium speech model that improves the accuracy of transcription results. The premium speech model is only supported with the phone_call speechModel. The premium phone_call model was built using thousands of hours of training data and produces 54% fewer errors when transcribing phone conversations compared to the basic phone_call model. The following TwiML instructs <Gather> to use the premium phone_call model:

<Gather input="speech" enhanced="true" speechModel="phone_call">
  <Say>Please tell us why you're calling</Say>
</Gather>

<Gather> will ignore the enhanced attribute if any speechModel other than phone_call is used. For most use cases involving short individual utterances from an English-speaking user, Twilio recommends using the enhanced phone_call model with speechTimeout set to auto, rather than Google's default speech model, as phone_call is the speech model best suited for queries such as voice commands or voice search. In languages other than English, for better endpointing (i.e., lower-latency start of speech recognition), experimental_utterances may be a better choice. For more on the experimental models, see below. The Dialogflow CX <VirtualAgent> uses Google's default speech model by default, but other speech models can be specified with the <Config> noun nested inside the <VirtualAgent> noun.
The phone_call speech model is also best for audio that originated from a PSTN phone call (typically an 8 kHz sample rate). This is because of how the model was trained (its training data) and its noise tolerance and noise reduction characteristics, particularly in the enhanced phone_call variant. Google has written extensively about how it trained its models and how they perform versus other models.

With phone_call, if you don't set speechTimeout to auto as suggested above, you will need to supply a positive integer number of seconds for speechTimeout. If you're picking these attributes and aren't sure whether the combination is valid, Twilio's notifications will send a debugger event letting you know when it isn't (see Warning 13335 or Warning 13334).

Twilio's experimental speech models are designed to give access to Google's latest speech technology and machine learning research for some more specialized use cases. They can provide higher recognition accuracy than the other available models, depending on use case and language. However, some features supported by the other speech models are not yet supported by the experimental models, such as confidence scores (more on those below).

Of special note, the experimental_utterances model is best suited for short utterances only a few seconds in length, in languages other than English. It's especially useful for capturing commands or other single-shot, directed speech (e.g., "press 0 or say 'support' to speak with an agent" in non-English languages). Alternatively, the numbers_and_commands speech model might also work for such cases. The experimental_conversations model supports longer, spontaneous speech and conversations – for example, responses to a prompt like "tell us why you're calling today," the transcript of an interactive session, or longer spoken messages within the 60-second snippets that <Gather> supports. Both the experimental_conversations and experimental_utterances values for speechModel support the set of languages listed here.

One final but especially important point about building speech models into your ASR application: you can change the speech model multiple times within a single TwiML application, over the course of multiple questions or prompts, to best suit the type of speech input you're expecting at each step. That is, you can specify the speech model, hints, and so on per individual <Gather> in a TwiML app to optimize the accuracy of your speech results.

5. Engineer your prompts to encourage natural and clear speech input

Encourage users to speak naturally and avoid rushing during interactions with the IVR system; natural speech patterns improve the accuracy of speech recognition. You'll want prompts to be long enough that Twilio and Google are ready for the speech input – but not so long that the user is put off. Telling users what or how to speak – or giving examples – isn't a bad idea.
In addition, the prompting questions you ask should either:

Be sufficiently narrow in scope that a generalized speech recognition engine has a decent chance of recognizing the answers from a limited set of possible valid ones (for example, using plenty of verbal cues, such as "you can say things like 'account balance', or ask when your local branch is open"), or
If a very broad question is the right starting point for conversations with your customers, consider using other, more structured tools for managing detected Intents, utterances, and phrases, and take advantage of those tools' auto-generation of training phrases and management of homonyms.

In short, if the set of possible answers and actions is small and short, building your own bot with ASR tools alone is a great idea. If the list of answers and actions is long, getting successful recognitions, correct routing, and correct answers can be complicated, so consider also using a predictive AI bot-building tool like Twilio's <VirtualAgent> connector with Google Dialogflow CX in addition to speech recognition.

6. Keep it clean (if you want)

The profanityFilter attribute of <Gather> specifies whether Twilio should filter profanities out of your speech recognition results and transcription. This attribute defaults to true, which replaces all but the initial character of each filtered profane word with asterisks. You can also use Twilio Voice Intelligence and recorded transcripts to detect customer sentiment for later flagging or Segment profile updating.

7. Offer DTMF as a backup

Provide Dual-Tone Multi-Frequency (DTMF), also known as "touch tones," as an alternative input method for when speech recognition fails. This allows users to enter responses on the keypad if needed. The input attribute specifies which inputs (DTMF or speech) Twilio should accept – the default input for <Gather> is dtmf, but you can set input to dtmf, speech, or dtmf speech. If you're expecting DTMF but the input from the caller might be speech, see the hints discussion in tip #3 above. You can set the number of digits you expect from your caller by including numDigits in <Gather>. If you set input to speech, Twilio will gather speech from the caller for a maximum duration of 60 seconds. If you set input to dtmf speech, the first detected input (speech or DTMF) takes precedence; if speech is detected first, finishOnKey (finish on a specified DTMF key press) will be ignored.

8. Stream it

Particularly if multiple real-time call orchestration steps are NOT required (depending on your use case), consider using Twilio Media Streams to send speech data to an external speech recognition provider through the Twilio Marketplace. Twilio Marketplace speech recognition partners can, for example, offer vocabularies optimized for certain industry verticals or use cases, or optimize for longer speech recognition "batching," leading to improved recognition accuracy and performance in your application. But do note that Media Streams doesn't yet support DTMF – that's coming in a future version of Media Streams.

9. Leverage confidence scoring in the prompting application

When the caller finishes speaking or entering digits (or the timeout is reached), Twilio makes an HTTP request to the URL that the action attribute takes as a value, and may send some extra parameters with that request after the <Gather> ends.
If you specify speech as an input with input="speech", Twilio will also include a Confidence parameter along with the recognized speech result. Confidence contains a confidence score between 0.0 and 1.0 (that is, from 0% to 100% confidence); a higher score means a better likelihood that the transcribed speech result is accurate. Not all speech models return Confidence, so depending on your model choice, your code should not expect it to be present – but when it is, you can leverage its value to take various actions.

After the <Gather> ends and Twilio sends its request to your action URL, you can act on the Confidence score if it is present. Speech recognition will never explicitly tell you it didn't recognize a word; you need to infer that from the Confidence score. For example, you could run a re-prompting routine whenever a result falls below a specific Confidence threshold (e.g., < 0.2), until a result is recognized. Re-prompting after low Confidence scores, rather than simply moving forward with an empty or low-confidence speech recognition result, can avoid 500 errors from your programmatic endpoint – and frustrated end users. Using re-prompting cleverly (for instance, applying hints along with a more specific, constrained re-prompt) to select from among the available relevant choices is a great "combination" strategy, pairing this tip with tip #3 above.

10. Don't exhaust your callers' patience

After a reasonable number of retries – likely two or three at most – try another tactic. After a certain number of failures, consider transferring the call to a live agent, who may be better able to cope with noisy, indistinct, or unexpected input. Studio and Dialogflow CX make this straightforward with a "Live Agent Handoff" option configurable on the Studio widget. Or, if a customer's response or question is sufficiently off-script but you still wish to handle it with an automated agent, consider doing a voice-enabled generative AI search for an answer to their query.

11. Implement 2FA or other post-processing techniques to deal with near-homonyms

Though its capabilities are improving quickly, ASR struggles mightily with homonyms: words or phonemes that sound alike but have different meanings. In particular, alphanumerics – for example, an insurance policy number, bank account number, or patient ID containing both letters and numbers – can be extremely problematic. Unfortunately, these are also quite commonly needed in the self-service (IVR) automation and notification use cases where ASR is used.

One solution is to use a combination of tools, such as Twilio Verify for Two-Factor Authentication (2FA), and to request only a portion of a mixed alphanumeric identifier. For example, you could ask for the last four digits, or only the numeric section of an ID. With part of an ID, you can then verify via a text message not only that the system has looked up the correct account number, but also that the system is talking to the correct person. Other post-processing solutions combine 2FA with the techniques above: re-prompting upon a low-confidence recognition score, prompt engineering (being more specific about the letters used in the re-prompt), and so on.
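To make tips 9 and 10 concrete, here is a minimal, hypothetical sketch of the decision logic an action-URL handler might apply. SpeechResult and Confidence are the parameter names Twilio posts after a speech <Gather>; the threshold, retry limit, agent number, class, and method names are all assumptions to adapt to your application:

public class GatherResultHandler {

    private static final double CONFIDENCE_THRESHOLD = 0.2; // assumed cut-off; tune per use case
    private static final int MAX_RETRIES = 3;               // tip 10: don't exhaust callers' patience

    /** Builds the next TwiML document from the parameters posted to the <Gather> action URL. */
    public static String nextTwiml(String speechResult, Double confidence, int attempt) {
        boolean empty = speechResult == null || speechResult.isBlank();
        // Not every speech model returns Confidence, so treat it as optional (tip 9).
        boolean lowConfidence = confidence != null && confidence < CONFIDENCE_THRESHOLD;

        if ((empty || lowConfidence) && attempt < MAX_RETRIES) {
            // Re-prompt with a narrower, hint-assisted question (tips 3 and 9).
            return "<Response><Gather input=\"speech\" hints=\"checking, savings\">"
                    + "<Say>Sorry, was that checking or savings?</Say></Gather></Response>";
        }
        if (empty || lowConfidence) {
            // Too many failures: hand off to a live agent (tip 10). The number is a placeholder.
            return "<Response><Say>Let me connect you to an agent.</Say>"
                    + "<Dial>+15005550006</Dial></Response>";
        }
        return "<Response><Say>Thanks, you said " + speechResult + ".</Say></Response>";
    }
}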
Maximizing your chance of success with Automatic Speech Recognition applications

Hopefully, this post has given you some valuable hints, tips, and tricks to architect your speech recognition application for success. By implementing these best practices, you'll find yourself with happier users – customers and agents alike! Once you've implemented them, read our next post in this series about using Google Dialogflow CX's Virtual Agent bot.

Russ Kahan is the Principal Product Manager for <Gather> Speech Recognition, Dialogflow Virtual Agents, Media Streams, and SIPREC at Twilio. He's enjoyed programming voice apps and conversing with robots since sometime back in the late nineties – when this stuff was still called "CTI," for "Computer Telephony Integration" – but he also enjoys real-world pursuits like scouting, skiing, swimming, and mountain biking with his kids. Reach him at rkahan [at] twilio.com.

Jeff Foster is a Software Engineer on Twilio's Programmable Voice team, and he's been working on speech recognition at Twilio for the last six years – including the original Dialogflow prototype implementations more than two years ago. He can be reached at jfoster [at] twilio.com.
Making HTTP requests is a core feature of modern programming, and is often one of the first things you want to do when learning a new programming language. For Java programmers there are many ways to do it, including core libraries in the JDK and third-party libraries. In this article, you will learn about different ways to make HTTP requests from your Java code, along with updates and recommendations for the core Java features and popular libraries that developers can use to make them.

Prerequisites

Sign up for NASA's Astronomy Picture of the Day (APOD) API key. Check your email for the API key and remember not to share it with anyone. Note that in this post we are using DEMO_KEY, which NASA provides for exploring the API but which has low rate limits; you are encouraged to create your own key. If you do, substitute your key for DEMO_KEY in the samples below.
Java version 11 or later.
A good Java IDE. I use IntelliJ IDEA, but Eclipse and NetBeans are very capable too.

Reading the NASA APOD response data

The responses from the APOD API are in JSON format. We will be pulling a few fields out of each response, so we need to parse the JSON into a Java object. Different HTTP libraries have different levels of support for JSON, but none of them knows the format of APOD API responses, so we have to define a class to match the format of the response ourselves. If you want more detail about this code, do read our post Three ways to use Jackson for JSON in Java. Create a class named APOD.java with the following code to model the results of the API requests:

package com.twilio;

import com.fasterxml.jackson.annotation.JsonProperty;

public class APOD {

    public final String copyright;
    public final String date;
    public final String explanation;
    public final String hdUrl;
    public final String mediaType;
    public final String serviceVersion;
    public final String title;
    public final String url;

    public APOD(@JsonProperty("copyright") String copyright,
                @JsonProperty("date") String date,
                @JsonProperty("explanation") String explanation,
                @JsonProperty("hdurl") String hdUrl,
                @JsonProperty("media_type") String mediaType,
                @JsonProperty("service_version") String serviceVersion,
                @JsonProperty("title") String title,
                @JsonProperty("url") String url) {
        this.copyright = copyright;
        this.date = date;
        this.explanation = explanation;
        this.hdUrl = hdUrl;
        this.mediaType = mediaType;
        this.serviceVersion = serviceVersion;
        this.title = title;
        this.url = url;
    }
}

Note that this class depends on the Jackson library for the JsonProperty annotation. You will need to add that to your project – the example repo uses Maven, so a <dependency> entry is needed in pom.xml. You can find it in the example repo here.

Built-in Java libraries

Back when everyone was stuck inside, making HTTP requests for use cases such as searching for COVID-19 vaccines or retrieving pictures from NASA, Java developers would reach for the libraries built into the Java platform: HttpURLConnection (since Java 1.1) and Java 11's HttpClient.
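Before wiring the class into an HTTP call, you can sanity-check the Jackson mapping against a canned JSON string – a minimal sketch, assuming the APOD class above is on the classpath (the sample values are made up, and fields missing from the JSON are simply left null):

package com.twilio;

import com.fasterxml.jackson.databind.ObjectMapper;

public class ApodMappingCheck {

    public static void main(String[] args) throws Exception {
        // A trimmed, made-up payload using the real APOD field names.
        String json = "{\"date\":\"2024-01-01\",\"media_type\":\"image\","
                + "\"title\":\"A Made-Up Nebula\",\"url\":\"https://example.com/apod.jpg\"}";

        ObjectMapper mapper = new ObjectMapper();
        APOD apod = mapper.readValue(json, APOD.class);
        System.out.println(apod.title + " (" + apod.date + ")");
    }
}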
Java 1.1 HttpURLConnection

HttpURLConnection has had no significant changes since it was added in 1997, so you can go ahead and make a GET request for the APOD data with the following code:

package com.twilio;

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URL;

public class JavaHttpURLConnectionDemo {

    public static void main(String[] args) throws IOException {
        // Create a neat value object to hold the URL
        URL url = URI.create("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY").toURL();

        // Open a connection(?) on the URL(?) and cast the response(??)
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();

        // Now it's "open", we can set the request method, headers etc.
        connection.setRequestProperty("accept", "application/json");

        // This line makes the request
        InputStream responseStream = connection.getInputStream();

        // Manually converting the response body InputStream to APOD using Jackson
        ObjectMapper mapper = new ObjectMapper();
        APOD apod = mapper.readValue(responseStream, APOD.class);

        // Finally we have the response
        System.out.println(apod.title);
    }
}

This is a reliable way to make requests if you are supporting clients on older versions of Java that cannot add a dependency, but these days it's rarely chosen for a new project. Note: the HttpURLConnection demo also uses the ObjectMapper from the jackson-databind library to parse the JSON into the APOD class we defined above. Check out the full demo code for HttpURLConnection in this GitHub repo.

Java 11 HttpClient

Like HttpURLConnection, the Java 11 HttpClient has not changed much since 2020 and remains a reliable way of making requests. This HttpClient was in development and preview for over a year before release, so developers had plenty of chances to try it out and provide feedback. This meant the teams working on Java could release HttpClient with a good degree of confidence that it would not need any significant redesign. It is a much more modern and flexible way of making both asynchronous and synchronous HTTP requests than HttpURLConnection. The client accepts a BodyHandler that can convert an HTTP response into a class of your choosing, either synchronously or asynchronously. To make asynchronous requests, use the code below to create a client and retrieve the response once it's ready (JsonBodyHandler is a small Jackson-based BodyHandler defined in the example repo):

private static void asynchronousRequest() throws InterruptedException, ExecutionException {
    // create a client
    var client = HttpClient.newHttpClient();

    // create a request
    var request = HttpRequest.newBuilder(
            URI.create("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"))
        .header("accept", "application/json")
        .build();

    // use the client to send the request
    var responseFuture = client.sendAsync(request, new JsonBodyHandler<>(APOD.class));

    // We can do other things here while the request is in-flight

    // This blocks until the request is complete
    var response = responseFuture.get();

    // the response:
    System.out.println(response.body().get().title);
}

If your application needs it, you might consider running synchronous requests instead. The trade-off is that synchronous code is generally somewhat easier to understand, as you can avoid thinking about multi-threading, but blocking a thread per request will probably use more system resources – although virtual threads make that blocking much cheaper, so the gap is shrinking.
To make this code synchronous, change the last few lines of the asynchronousRequest method as in the example below, so that the response is set as soon as the client finishes sending the request:

private static void synchronousRequest() throws IOException, InterruptedException {
    // create a client
    var client = HttpClient.newHttpClient();

    // create a request
    var request = HttpRequest.newBuilder(
            URI.create("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"))
        .header("accept", "application/json")
        .build();

    // this line blocks until the response is completed, so there is no way to do other processing while it is in flight
    HttpResponse<Supplier<APOD>> response = client.send(request, new JsonBodyHandler<>(APOD.class));

    // the response:
    System.out.println(response.body().get().title);
}

Check out the full demo code for the HttpClient in this GitHub repo.

Check out third-party libraries

Third-party libraries might be your preference, since they can make your app easier to maintain. For one thing, there is a community of developers out there, effectively helping you debug problems you might encounter. External libraries can also provide different abstractions and levels of convenience for the coder, balancing this against performance and resource usage concerns. They very often have helpers for creating tests for your HTTP code, which makes a big difference to developer productivity. Check out the following libraries and see if they fit your use case.

OkHttp

OkHttp is "Square's meticulous HTTP client for Java and Kotlin", and has been a popular choice for a long time – 2023 marks its tenth birthday. It supports modern features like HTTP/2 and connection multiplexing, which can help improve the efficiency of your application. To create an OkHttpClient and make requests, use the code below:

package com.twilio;

import com.fasterxml.jackson.databind.ObjectMapper;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

import java.io.IOException;

public class OkHTTPDemo {

    private static final ObjectMapper mapper = new ObjectMapper();

    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient();

        Request request = new Request.Builder()
            .url("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY")
            .build(); // defaults to GET

        Response response = client.newCall(request).execute();

        APOD apod = mapper.readValue(response.body().byteStream(), APOD.class);
        System.out.println(apod.title);
    }
}

Check out the full demo code for OkHttp in this GitHub repo.

Apache libraries

If you've been around long enough to have used Commons HttpClient, well, update your naming conventions, because it's called HttpComponents Client now. Creative, right? And something I have to look up every time I need it. Jokes about the naming aside, this library is powerful and well-maintained and has a large user community – if you work in Java for any length of time, I'm sure you will find a project that uses it.
Here is the code for making requests:

package com.twilio;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;

import java.io.IOException;

public class ApacheHttpClientDemo {

    private static final ObjectMapper mapper = new ObjectMapper();

    public static void main(String[] args) throws IOException {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpGet request = new HttpGet("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY");
            APOD response = client.execute(request, httpResponse ->
                mapper.readValue(httpResponse.getEntity().getContent(), APOD.class));
            System.out.println(response.title);
        }
    }
}

Check out the full demo code for the Apache HttpClient in this GitHub repo.

Retrofit

Also developed by Square, Retrofit is a type-safe HTTP client for Android and Java that sits on top of OkHttp and provides an abstraction that works wonderfully with Java interfaces, allowing standard testing tools to be used for mocking and dependency injection. Converters can also be added to support the integration of other types of libraries. There are serialization libraries that automatically convert JSON responses into Java or Kotlin objects, and they can also serialize your request objects into JSON without manual handling. This is done using Jackson, but you don't have to get your hands dirty – Retrofit handles it for you, and you only need to deal with Java objects. Here's the code for a demo using Retrofit:

package com.twilio;

import retrofit2.Retrofit;
import retrofit2.converter.jackson.JacksonConverterFactory;
import retrofit2.http.GET;
import retrofit2.http.Headers;
import retrofit2.http.Query;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class RetrofitDemo {

    public interface APODClient {
        @GET("/planetary/apod")
        @Headers("accept: application/json")
        CompletableFuture<APOD> getApod(@Query("api_key") String apiKey);
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Retrofit retrofit = new Retrofit.Builder()
            .baseUrl("https://api.nasa.gov")
            .addConverterFactory(JacksonConverterFactory.create())
            .build();

        APODClient apodClient = retrofit.create(APODClient.class);
        CompletableFuture<APOD> response = apodClient.getApod("DEMO_KEY");

        // do other stuff here while the request is in-flight

        APOD apod = response.get();
        System.out.println(apod.title);
    }
}

Check out the full demo code for Retrofit in this GitHub repo.

The Spring ecosystem

Last but certainly not least: if you're using Spring (via Boot or some other combination of its many modules), you can use Spring's own HTTP clients, with two options depending on whether you want a synchronous or asynchronous application. Since these are part of the Spring ecosystem, the clients integrate seamlessly with other ecosystem modules, such as dependency injection and transaction management. To try out these demos, please follow this article to learn how to set up a Java Spring Boot application.

Spring RestTemplate

Spring's RestTemplate is considered legacy at this point and makes synchronous calls. Still, it's included here because Spring's (deserved) popularity means you're likely to come across it in existing projects. RestTemplate is part of the spring-web module.
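One wrinkle worth knowing up front: Spring Boot auto-configures a RestTemplateBuilder but not a plain RestTemplate bean, so the service shown below, which injects one, needs a definition along these lines – a minimal sketch (the configuration class name is arbitrary):

package com.twilio;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // Expose a RestTemplate so it can be injected into services like NasaApodService below.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}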
RestTemplate abstracts away much of the boilerplate code required to create and manage HTTP connections, headers, and payloads, making requests quicker and more convenient to write, and it offers convenience and flexibility in exception and error handling. The class can be mocked for unit testing, allowing you to test your code without making actual network requests, and you can extend it with custom request and response interceptors, message converters, and other components to match the specific requirements of your application. If you're building a traditional synchronous application with a blocking I/O model, RestTemplate may be the more appropriate choice.

Get started with RestTemplate by setting up a Java Spring Boot application. Then, create a service class in your project directory in a file named NasaApodService.java, and paste the following code into the file:

package com.twilio;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class NasaApodService {

    private final RestTemplate restTemplate;

    @Autowired
    public NasaApodService(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public NasaApodResponse getNasaApod(String apiKey) {
        String apiUrl = "https://api.nasa.gov/planetary/apod?api_key=" + apiKey;
        ResponseEntity<NasaApodResponse> responseEntity =
            restTemplate.getForEntity(apiUrl, NasaApodResponse.class);
        return responseEntity.getBody();
    }
}

This service class uses a RestTemplate object to make a GET request to NASA's APOD API. The getNasaApod() method retrieves the response from the API, then maps it to a NasaApodResponse class. Create the NasaApodResponse.java model class to mirror the structure of the JSON response, with its respective getters and setters:

package com.twilio;

public class NasaApodResponse {

    private String title;
    private String explanation;
    private String url;

    // add a getter and setter for each field so Jackson can populate and serialize the object
}

In order to hit the API endpoint, create the file NasaApodController.java and paste in the following code:

package com.twilio;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NasaApodController {

    private final NasaApodService nasaApodService;

    @Autowired
    public NasaApodController(NasaApodService nasaApodService) {
        this.nasaApodService = nasaApodService;
    }

    @GetMapping("/nasa/apod")
    public NasaApodResponse getNasaApod(@RequestParam("api-key") String apiKey) {
        return nasaApodService.getNasaApod(apiKey);
    }
}

RestTemplate works nicely with other Spring modules, but it relies on a lower-level library for the actual HTTP work – in this example we're using HttpComponents for that. Please check the example repo for all the details. Once you have the code as it is in that repo, you can run the main method in SpringHttpClientsDemoApplication, then visit http://localhost:8080/nasa/apod?api-key=DEMO_KEY to see the result.

WebClient

On the other hand, WebClient works asynchronously and is recommended for more modern applications. It is especially helpful for utilizing resources well and improving overall throughput in an application that has multiple dependencies or microservices.
It uses non-blocking operations and can therefore adapt to various reactive runtime environments, such as Reactor, RxJava, and CompletableFuture. WebClient is part of the spring-webflux module. WebClient supports reactive streams, making it possible to handle streaming data in both requests and responses – useful for scenarios like uploading or downloading large files. It is well suited to building highly scalable and responsive applications, especially in microservice architectures, and follows the latest architectural patterns.

Get started with WebClient by setting up a Java Spring Boot application. Then, create a service class in your project directory in a file named NasaApodService.java, and paste the following code into the new file:

package com.twilio;

import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@Service
public class NasaApodService {

    private final WebClient webClient;

    public NasaApodService(WebClient.Builder webClientBuilder) {
        this.webClient = webClientBuilder.baseUrl("https://api.nasa.gov/planetary/apod").build();
    }

    public Mono<NasaApodResponse> getNasaApod(String apiKey) {
        return webClient.get()
            .uri(uriBuilder -> uriBuilder
                .queryParam("api_key", apiKey)
                .build())
            .retrieve()
            .bodyToMono(NasaApodResponse.class);
    }
}

This service class uses a WebClient object to make a GET request to NASA's APOD API. The getNasaApod() method retrieves the response from the API, then maps it to a NasaApodResponse class. Create the NasaApodResponse.java model class to mirror the structure of the JSON response, with its respective getters and setters – the same code as in the RestTemplate example. In order to hit the API endpoint, create the file NasaApodController.java and paste in the following code:

package com.twilio;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
public class NasaApodController {

    private final NasaApodService nasaApodService;

    @Autowired
    public NasaApodController(NasaApodService nasaApodService) {
        this.nasaApodService = nasaApodService;
    }

    @GetMapping("/nasa/apod")
    public Mono<NasaApodResponse> getNasaApod(@RequestParam("api-key") String apiKey) {
        return nasaApodService.getNasaApod(apiKey);
    }
}

As before, you can test it by running the main method and then visiting http://localhost:8080/nasa/apod?api-key=DEMO_KEY in a browser. Don't forget to insert your own API key. View the full code for the Spring WebClient demo in this GitHub repository. What picture did you receive from the NASA API today?

Other HTTP clients for Java

There are plenty more ways to integrate an HTTP client into your application. Check out these other third-party libraries:

REST Assured – an HTTP client designed for testing your REST services. It offers a fluent interface for making requests and helpful methods for making assertions about responses.
cvurl – a wrapper for the Java 11 HttpClient which rounds off some of the sharp edges you might encounter making complex requests.
Feign – similar to Retrofit, Feign can build classes from annotated interfaces. Feign is highly flexible, with multiple options for making and reading requests, metrics, retries, and more.
MicroProfile Rest Client – another client in the "build a class from an annotated interface" mode; this one is interesting because you can reuse the same interface to create a web server too, ensuring that the client and server match. If you're building both a service and a client for that service, it could be the one for you.

What's next?

Now that you know how to make HTTP requests using various methods in Java, why not try exploring ways to make requests in other programming languages? Check out the articles below, and we'd love to hear from you if you find them useful:

How to Send and Test HTTP Requests for Twilio SMS in Postman
How To Make REST API Requests in PowerShell
5 Ways to Make HTTP Requests in Node.js
HTTP Methods and Making API Requests with cURL

Matthew is a Developer Evangelist at Twilio specialising in Java and other JVM languages. Previously he was a developer working on public cloud infrastructure, and before that a teacher of English as a foreign language.

Diane Phan is a software engineer on the Developer Voices team. She loves to help programmers tackle difficult challenges that might prevent them from bringing their projects to life. She can be reached at dphan [at] twilio.com or LinkedIn.