Drupal Planet

Tag1 Consulting: Alex Ross on Drupal, module maintenance, NYC DrupalCamp, and getting kicked out of 30 Rock

1 month 1 week ago

Tag1’s 20 years of Drupal series continues with Alex Ross (bleen | Drupal.org), Sr. Director of Software Engineering at NBC Universal. Alex’s long experience with Drupal started like many others’: he fell into it almost by accident. From his beginnings as a ‘dumpster diver’ in the issue queue to becoming an organizer for Drupal Camp NYC, Alex has contributed to Drupal and the community in significant ways. Join Tag1 Managing Director Michael Meyers as he and Alex talk about the history of NYC Drupal Camp, Alex’s journey with and beyond Drupal, and the ups and downs of the changes that have come to Drupal over the last 20 years. You’ll also hear about being a module owner and maintainer, and get a first-hand account of working with the Drupal Security team!

For a transcript of this video, see Transcript - 20 years of Drupal with Alex Ross.

Additional links

These are some of the prominent modules Alex has worked on or maintains:

  • reCaptcha
  • DART
  • Advanced image crop
  • Akamai ESI

Photo by Juan Pablo Mascanfroni on Unsplash


Metadrop: Mocking third-party API in development and test environments

1 month 1 week ago

A big part of our job is to integrate the tools we build with third-party services as efficiently and reliably as possible, which means maintaining robust code that is covered by tests. In one of our projects, we developed a feature that was responsible for adding or removing users to mailing lists managed by an external service, with which we communicated via an API.

On this occasion, we integrated the API using the library supplied by the provider itself, and we also developed a custom service as an interface for interacting with that library.

The logic we had to apply to decide when to add or remove users from the multiple lists was somewhat complex, as it depended on the different statuses of a user, their account settings,  and roles. 

For development and automated test coverage, we decided to duplicate the lists in the external service so that we had versions separate from the production ones. The problem arose when, at a certain point, the list management tests started to fail randomly. We then realized that the inconsistency came from tests launched in parallel from different merge requests, all acting on the same lists.

The solution was mocking. In our case, what we had to mock was not the API itself but the custom service. With this approach, we decouple the development of the API integration itself from the development of the specific functionalities that depend on the…
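To sketch the general pattern: in a Drupal kernel test, the custom service that wraps the third-party library can be swapped in the container for a mock before the logic under test runs. The module, service IDs, and method names below are hypothetical and only illustrate the approach, not the project’s actual code.

<?php

namespace Drupal\Tests\my_module\Kernel;

use Drupal\KernelTests\KernelTestBase;
use Drupal\my_module\MailingListClientInterface;

/**
 * Tests list-subscription logic against a mocked integration service.
 */
class MailingListSyncTest extends KernelTestBase {

  protected static $modules = ['user', 'my_module'];

  public function testActiveUserIsSubscribed(): void {
    // Replace the real client (which would call the third-party API) with a
    // mock that simply records the expected call.
    $client = $this->createMock(MailingListClientInterface::class);
    $client->expects($this->once())
      ->method('subscribe')
      ->with('test@example.com', 'newsletter');
    $this->container->set('my_module.mailing_list_client', $client);

    // Run the logic under test; it pulls the client from the container, so
    // no request ever reaches the external service.
    $this->container->get('my_module.list_manager')->syncUser('test@example.com');
  }

}

Because the tests only ever talk to the mock, parallel test runs no longer interfere with each other through shared external lists.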

The Drop Times: Whether Backdrop Is Welcome at DrupalCons?

1 month 1 week ago
The interesting turn of events that led to a hurried yet well-thought-out response from the Drupal Association started with a tweet sharing The Drop Times’ report on a prerecorded interview with Dries, webcast as the opening session of Drupal Camp Colorado.

Drupal Association blog: Open source community participation at DrupalCons

1 month 1 week ago

Many people and organizations in our community contribute to and/or work with multiple projects that overlap with Drupal, including WordPress, Backdrop CMS, and others. We welcome all who accept our Drupal Code of Conduct to attend and participate in DrupalCon and its events.

DrupalCon sessions are reviewed and accepted by a volunteer panel. People are welcome to submit any Drupal-related sessions they think will interest attendees. Because of the large number of submissions we receive, we can’t accept them all; typically only about 20% of submissions are accepted. The most successful submissions are topical, tied to the event’s themes, and timely for the audience. No one is penalized for sessions that are not accepted.

I'm currently serving as Acting Executive Director and am making this statement on behalf of the organization. For further questions, you can contact me directly via email: [email protected].  

Talking Drupal: Talking Drupal #360 - Backdrop Case Study

1 month 1 week ago

Today we are talking about Backdrop CMS with Eric Toupin.

www.talkingDrupal.com/360

Topics
  • What is Backdrop
  • How did you hear about it
  • Tell us about Aten and your clients
  • What type of work is Aten doing with Stanford
  • Why was Backdrop CMS considered
    • How long was Backdrop out before you considered it
  • Are there features Backdrop has that Drupal does not have
  • What are some limitations of Backdrop
  • If someone has Drupal 7 what do you consider the criteria for Backdrop vs Drupal 9
  • Are you working on other Backdrop sites
  • Do you consider Backdrop its own CMS
  • Have you contributed anything back to Drupal from Backdrop
  • Does Aten consider Backdrop a service it provides
Resources

Guests

Eric Toupin - www.drupal.org/u/erictoupin

Hosts

Nic Laflin - www.nLighteneddevelopment.com @nicxvan
John Picozzi - www.epam.com @johnpicozzi
Cathy Theys - @YesCT

MOTW

hreflang - The core Content Translation module adds hreflang tags only on content entity pages. This module, on the other hand, adds hreflang tags to all pages, and can be configured to defer to Content Translation module on content entity pages. If for some reason you’d like to modify the hreflang tags on a page, you can do so by implementing…

Drupal Association blog: Sharing our DrupalCon Prague excitement and a message from Dries!

1 month 1 week ago

DrupalCon Prague is right around the corner! Here at the Drupal Association, we could not be more excited for the conference to be back in person for our European community for the first time in three years. DrupalCon Portland was a great success, and we're reminded that nothing matches the excitement and energy of seeing each other (safely) in person. DrupalCon is an essential opportunity for individuals and organizations in the Drupal ecosystem, a key way to support the programs of the Drupal Association as a non-profit, and the place where many friendships and connections are formed, major Drupal contributions are made, and project milestones are achieved.

Dries shared this invitation for you to join us at DrupalCon Prague:

I'm excited that DrupalCon Europe is back in person and looking forward to meeting the global community in Prague. DrupalCon is the heartbeat of Drupal innovation, a place to grow your network and skills, and a community homecoming. I hope you join us in Prague!

-- Dries Buytaert, Drupal Founder and Project Lead

Why DrupalCon?

The DrupalCon events are the largest gathering of the Drupal community in the world. Each year, we meet at regional Drupal camps, meetups, and other events in more than 200 countries. But DrupalCons serve as the community’s primary major gathering.  There is nothing like DrupalCon.  Whether it be to find inspiration, business and networking opportunities, learning, contribution, job networking, or seeing old friends and making new ones from across the globe, DrupalCons offer something for everyone. 
 

With the next conference coming up in September, you may be wondering – what all goes into planning a DrupalCon? From logistics to the program schedule and everything in between, many decisions are made behind the scenes during the planning period. To get you ready and in the conference spirit, we thought it would be helpful to have a look behind the scenes!
 

Who's involved in creating DrupalCon?

DrupalCon is a program of the Drupal Association and is the original reason the Drupal Association was founded! There have been more than 40 DrupalCons, all over the world and online. The Drupal Association staff and board select the host locations for DrupalCon based on several factors: local community presence, venue availability, ease of travel, and strength of the business ecosystem. Every host city selection balances these factors with fundraising for Association programs that support the community.

The Drupal Association works with key partners and vendors in the region of each DrupalCon. In North America, the Drupal Association is the primary event organizer, with event production support from organizations like Social Enterprises and the local conference center staff. This consultant relationship assists our small team with logistics such as venue contracts, event set-up, registration, sponsor support, and more.

In Europe, DrupalCon is licensed to Kuoni Congress, which first partnered with the Drupal Association to bring the event to the European community in 2019. We began this relationship because Kuoni's European staff and regional connections to venues in our host cities are invaluable. As a small organization, having their more local view and understanding of the region to supplement our team has been incredible. After 4+ years of working very closely with Drupal Association staff and local Drupal associations, Kuoni Congress is a firm part of the Drupal community. They’ve worked hard to keep the event alive, even during the last two years when revenue did not cover their cost to execute the event.  We are grateful to Kuoni and Social Enterprises for their continued support and dedication to DrupalCon and the Drupal community.
 

Finally, the other key part of putting together a DrupalCon is you!  No matter where DrupalCon is located, both the Drupal Association and Kuoni Congress rely on volunteers in the community to create programming, organize contribution events, and promote the event. Dozens of volunteers are involved in every event. We could not do it without you! 

Who determines the DrupalCon programming?

For every DrupalCon, we have consulted with passionate community members when selecting programming! For each DrupalCon, submissions are open during our Call for Speakers, where we welcome community members to submit their session ideas. Unfortunately, we do not have the capacity to accept all of the submissions that we receive – we wish that we could! Thus, we have a team of Drupal community volunteers that goes through and recommends the sessions submitted by the community members. This is a careful process that is done with much consideration and time. These community volunteers take on the difficult task of balancing new and returning voices, recommending a variety of topics, and keeping the programming on the pulse of the latest trends in Drupal. We're endlessly grateful for their input.

Any community member is welcome to submit a session during the Call for Speakers, with an emphasis on those who are underrepresented in technology, and we are proud to have our Drupal community members be the ones curating the content! By putting the programming in our community’s hands, we’ve facilitated so many incredible sessions, workshops, and contribution days. All of this is possible thanks to you!

During the planning phases of the conference, our teams and consultants work alongside a variety of committees: the DrupalCon Europe Advisory committee, Summit committees, Speaker Recruitment committees, Contribution mentors, and scholarship mentors, to ensure we are engaging the community, welcoming new people, and developing innovative events.

For the Drupal Association, our partners like Kuoni Congress, and the dedicated community volunteers who help make each event special, DrupalCon is a significant endeavor! With so many details needing attention, it’s a long process with many variables. It takes strong organizational skills to put on a good event while balancing all these considerations! Truthfully, there will always be trade-offs. However, ensuring that the Drupal community can get together to share ideas and make new connections makes it all worth it!

The work of planning and leading an event is intense and can, at times, be grueling and difficult. For this reason, we want to celebrate our leaders like Von Eaton, who led an exceptional DrupalCon North America under complex pandemic circumstances (even as a fairly new Drupal Association employee!), as well as the team at Kuoni who have gone above and beyond to keep DrupalCon Europe engaging and alive throughout the pandemic.  Thank you for all you do.

And finally, to you, thanks for reading and for continuing to be a part of DrupalCon.  We’re looking forward to a fantastic DrupalCon in Prague!

#! code: Drupal 9: Creating A Category Menu Using Derivers

1 month 1 week ago

Derivers in Drupal are one of the ways you can inform Drupal about the presence of plugins. They let you generate multiple variants (derivatives) of a single plugin so that it is represented as multiple different plugins within the system.

Perhaps the most useful deriver example I have seen is the menu deriver. This allows us to use the Drupal plugin architecture to generate custom menu links.

If you want to create menu links for your module, you would normally add them one at a time to a *.links.menu.yml file. This is an example of using the menu links plugin system to inform the menu system about the links you want to add.

For example, the core config module has a single entry in its config.links.menu.yml file that adds the "Configuration synchronization" menu item to the administration menu. Here are the contents of that file.

config.sync:
  title: 'Configuration synchronization'
  description: 'Import and export your configuration.'
  route_name: config.sync
  parent: system.admin_config_development

Instead of doing this, we can use a menu deriver to tell the menu system to look at a class that will then inject menu links into the menu. This saves us from adding them one by one and also means we can dynamically create menu links without hard coding them into the *.links.menu.yml file.

In this article, I will look at setting up a menu deriver and then using that deriver to inject custom elements into the menu.

Setting Up A Menu Deriver

The first thing to do is set up a menu deriver class. This class should implement a method called getDerivativeDefinitions(), which returns the plugin derivatives. Because we are calling this from the menu system, the method needs to return an array of menu link definitions that the menu system understands.
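To make this concrete, here is a rough sketch of such a class; the module, class, and route names are hypothetical and the categories are hard-coded, so treat it as an illustration of the pattern rather than the article's actual code. It would be registered from a single entry in the module's *.links.menu.yml file whose deriver key points at the class.

<?php

namespace Drupal\my_module\Plugin\Derivative;

use Drupal\Component\Plugin\Derivative\DeriverBase;

/**
 * Generates one menu link per category.
 */
class CategoryMenuLinkDeriver extends DeriverBase {

  /**
   * {@inheritdoc}
   */
  public function getDerivativeDefinitions($base_plugin_definition) {
    // In a real module these would likely come from configuration or entities.
    $categories = ['News', 'Articles', 'Events'];
    foreach ($categories as $category) {
      $id = strtolower($category);
      // Each derivative is a menu link definition keyed by a unique ID.
      $this->derivatives[$id] = [
        'title' => $category,
        'route_name' => 'my_module.category_page',
        'route_parameters' => ['category' => $id],
      ] + $base_plugin_definition;
    }
    return $this->derivatives;
  }

}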

Read more.

Droptica: How to Effectively Clip Photos? An Overview of the Crop API Drupal Module

1 month 2 weeks ago

When creating a website whose editors will be working on content with photos, a problem often arises: how to manage the images so that the editor doesn’t have to manually edit them when they want to publish the same content with a different look? In Drupal, we can hit this problem when creating new view modes for any entity with images. The purpose of view modes is to serve the same content in a different form than the default one. For text or date fields, we will use different formatters. And for images?

Crop API and UI modules using this API

Drupal Core allows us to crop images without additional modules, but this functionality isn’t flexible enough to suit all needs. The Crop API Drupal module provides a basic API for more customized cropping of images. Its API is used, for example, by Image Widget Crop, which provides a UI for cropping images using predefined crop types. It’s very useful for websites that publish articles with images or for media management websites.

Another module using Crop API is Focal Point. It allows you to determine which part of the image is the most important. This fragment is used when cropping or scaling an image so that - for example - one of the people present doesn’t lose their head.

The Crop API Drupal module - general information

The module was released on 17 November 2014, and its latest version, 2.2, was published on 18 February 2022. The module is actively supported and developed. It’s compatible with PHP 8.1 and with Drupal ^8.8 and 9.

Popularity of Crop API

The module has more than 90 thousand installations, and the number keeps growing every week, continuing an upward trend since the first release. The newer 2.x version took three years to displace the older 1.x version. Currently, the vast majority of websites using Crop API use the newer version.

Source: Drupal.org

Authors of Crop API

The original creator of the module is Janez Urevc (slashrsm). He works as a Senior Performance Engineer at Tag1 Consulting, where he develops and maintains web applications. He is an active member of the Drupal community - in 2014 he helped launch the media initiative for Drupal 8, working with other community members to bring media handling in Drupal to a new level.

The Crop API module is officially supported by MD Systems GmbH, and the main maintainers of the module, in addition to Janez, are Adam G-H (phenaproxima), Alexandre Mallet (woprrr), and the Drupal Media Team.

Installation

Crop API doesn’t require the installation of additional libraries. It only has dependencies on the Image and User modules, which are part of Drupal's core. The installation is therefore carried out in the standard way. As always, we recommend installing the module using Composer.

composer require 'drupal/crop:^2.2'

The module provides two new permissions: Administer crop settings, allowing you to manage basic Crop API settings, and Administer crop types, which allows you to add, delete and edit defined crop types.

 

Use of the Crop API module

As we pointed out in the introduction, the Crop API module alone doesn’t allow for much. It should be seen as an interface that other modules can use. Nevertheless, it has several configuration options that we’ll try to explain.

Crop API provides a new entity type - Crop type. In this entity, we define the crop types we want to use.

 

When adding a new crop type, we have the option to create several settings.

Soft limit stretches the image to achieve the given proportions. Hard limit resizes and cuts off part of the image.

Hooks

Crop API provides one additional hook: hook_crop_entity_provider_info_alter. With it, we can change the information about the entity provider, which is calculated by default in the class \Drupal\crop\Annotation\CropEntityProvider. In the hook, we have access to the $providers array. We can change it in order to, for example, edit the media provider title.
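To make this concrete, here is a minimal sketch of such an implementation in a custom module’s .module file; the module name and the exact array keys are assumptions (they depend on the provider plugin definitions), so adjust them to what your site actually defines.

<?php

/**
 * Implements hook_crop_entity_provider_info_alter().
 */
function my_module_crop_entity_provider_info_alter(&$providers) {
  // Adjust how the media entity provider is labelled, if it is defined.
  // The exact keys depend on the CropEntityProvider plugin definitions.
  if (isset($providers['media']['label'])) {
    $providers['media']['label'] = t('Media (customized)');
  }
}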

Extension modules

Crop API was created to serve as an interface that other modules can use. To obtain the full range of possibilities, it’s necessary to select one of the extension modules that best suits your needs in terms of functionality.

Image Widget Crop

The module provides a widget that allows the user to select one of the predefined crop types. It has a responsive mode for changing the type and for manual adjustment.

 

Focal Point

The module allows us to specify the key point of an image, which will be treated as its center during the cropping process. If you’ve ever had the most important part of an image cropped away by scaling with a hard crop, this module will prove to be a lifesaver.

The Crop API Drupal module - summary

Crop API is a useful tool that, in combination with supporting modules, provides customized functionality. Installing this module is recommended if your website requires more flexible solutions than those available in core.

Are you considering which modules to choose for your project? We’d be happy to suggest the most suitable tools. We develop Drupal websites on a daily basis, using many existing modules and creating our own when needed.

Pierce Lamb: Custom Workflow Orchestration in Python

1 month 2 weeks ago

(I put Custom Workflow Orchestration In Python in an Art Generator and the above popped out)

This post originally appeared on my employer VISO Trust’s blog. It is lightly edited and reproduced here.

On the Data & Machine Learning team at VISO Trust, one of our core goals is to provide Document Intelligence to the audit team. Every Document that passes through the system is subject to collection, parsing, reformatting, analysis, reporting and more. Every day, we work to expand this feature set, increase its accuracy and deliver faster results.

Why we needed workflow orchestration

There are many individual tasks executed which eventually result in what’s provided by Document Intelligence, including but not limited to:

  • Security Control Language Detections
  • Audit Framework Control ID Detections
  • Named Entity Extraction like organizations, dates and more
  • Decryption of encrypted pdfs
  • Translation of foreign language pdfs
  • Document Classification
  • Document Section Detection

Until our workflow orchestration implementation, the features listed above and more were all represented in code inside a single function. Over time, this function became unwieldy and difficult to read, with snippets of ceremony, controls, logging, function calls, and more sprinkled throughout. Moreover, this is one of the most important areas of our app, where new features will be implemented regularly, so the need to clean this code up and make it easier to reason about became clear. Furthermore, execution inside this function occurred sequentially even though some of its function calls could run in parallel. While parallel execution isn’t required in the current state, we knew that features in the near-term roadmap would necessitate it. With these two requirements:

  • task execution that is easier to reason about and
  • the ability to execute in parallel

We knew we needed to either use an existing workflow orchestration tool or write our own. We began with a rough analysis of what was going on in our main automation function: we formalized each ‘step’ into a concept called a Task and theorized about which Tasks could execute in parallel. At the time of the analysis, we had 11 Tasks, each of which required certain inputs and produced certain outputs; based on these inputs and outputs, we determined that a number of them could run in parallel. With this context, we reviewed two of the major open source Python toolkits for workflow orchestration: Luigi and Apache Airflow.

Both of these toolkits are designed for managing workflows that have tens, hundreds, or even thousands of tasks to complete and can take days or weeks to finish. They have complex schedulers, user interfaces, failure modes, options for a variety of input and output modes, and more. Our pipeline will reach this level of complexity someday, but with an 11-Task pipeline, we decided that these toolkits added too much complexity for our use. We resolved to build a custom workflow orchestration toolkit guided by the deep knowledge in these more advanced tools.

Our custom workflow orchestration

The first goal was to generalize all of the steps in our automation service into the concept of a Task. A few examples of a Task would be:

  • detecting a document’s language,
  • translating a foreign language document,
  • processing OCR results into raw text,
  • detecting keywords inside text,
  • running machine learning inference on text.

Just reading this list gives one a feel for how each Task depends on a previous Task’s output to run. Being explicit about dependencies is core to workflow orchestration, so the first step in our Task concept was defining what inputs a given Task requires and what outputs it will produce. To demonstrate Tasks, we will develop a fake example Task called DocClassifyInference, the goal of which is to run ML inference to classify a given document. Imagine that our model uses both images of the raw PDF file and the text inside it to make predictions. Our Task, then, will require the decrypted PDF and the paginated text of the PDF in order to execute. Further, when it’s complete it will write a file to S3 containing its results. Thus, the start of our example Task might look like:

[Code sample embedded in the original post: https://medium.com/media/094d252043626f462dee2692b54f2b29/href]

DocClassifyInference subclasses S3Task, an abstract class that enforces defining a method to write to s3. S3Task itself is a subclass of the Task class which enforces that subclasses define input keys, output keys and an execute method. The keys are enforced in a Pipeline class:

[Code sample embedded in the original post: https://medium.com/media/767e8d6e412c0ec6b472df79028c536d/href]

This Pipeline will become the object that manages state as our Tasks execute. In our case we were not approaching memory limits, so we decided to keep much of the Task state in memory, though this could easily be changed to always write to and read from storage. As a state manager, the Pipeline can also capture, before any Tasks execute, ceremony that downstream Tasks may require.

Continuing on with DocClassifyInference: as a subclass of the abstract Task class, it must also implement def execute (enforced by Task). This method takes a Pipeline and returns a Pipeline. In essence, it receives the state manager, modifies the state, and returns it so the next Task can operate on it. In our example, execute will extract the decrypted PDF and paginated text so they can be used as inputs for an ML model to perform document classification. Let’s look at the entire stubbed-out DocClassifyInference:

[Code sample embedded in the original post: https://medium.com/media/e70499b33e3aa2979d5713f967406298/href]

It’s easy to see how DocClassifyInference gets the Pipeline state, extracts what it needs, operates on that data, sets what it has declared it’s going to set and returns the Pipeline. This allows for an API like this:

[Code sample embedded in the original post: https://medium.com/media/c6c6331180e2c768bebe8b6d3d93e156/href]

Which, of course, was much cleaner than what we had previously. It also lends itself to writing easy, understandable unit tests per Task, as well as adhering more closely to functional programming principles. So this solves our first goal of making the code cleaner and easier to reason about. What about parallel processing?

Parallel Processing

Similar to Luigi and Apache Airflow, the goal of our workflow orchestration is to generate a topologically sorted Directed Acyclic Graph of Tasks. In short, having each Task explicitly define its required inputs and intended output allows the Tasks to be sorted for optimal execution. We no longer need to write the Tasks down in sequential order like the API described above, rather we can pass a Task Planner a list of Tasks and it can decide how to optimally execute them. What we’ll want then is a Task Planner that is passed a List of Tasks, sorts the Tasks topologically and returns a list where each member is a list that contains Tasks. Let’s take a look at what this might look like using some of our examples from above:

[Code sample embedded in the original post: https://medium.com/media/0a663e1d21fad19f6742fcb8626491c2/href]

Here I have retained our examples while adding two new Tasks: KeywordDetection and CreateCSVOutput. You can imagine these as matching keywords in the paginated text, and combining the results of RunDocInference & KeywordDetection into a formatted CSV output. When the Task Planner receives this list, we’ll want it to topologically sort the tasks and output a data structure that looks like this:

[Code sample embedded in the original post: https://medium.com/media/1f13a65d01989925558a170c8f56b694/href]

In the above List, you can imagine each of its members as a ‘stage’ of execution. Each stage has one-to-many Tasks; in the case of one, execution occurs sequentially, and in the case of many, execution occurs in parallel. In English, the expected_task_plan can be described like so:

  • DecryptPDF depends on nothing and creates a consumable PDF,
  • PaginatedText depends on a consumable PDF and creates a list of strings
    - RunDocInference depends on both and classifies the document
    - KeywordDetection depends on paginated text and produces matches
  • CreateCSVOutput depends on doc classification and keyword detection and produces a formatted CSV of their outputs.

An example of the function that creates the expected_task_plan above might look like:

[Code sample embedded in the original post: https://medium.com/media/3c129c3f6c9c4e794bec10497bf706a7/href]

This function gets the list of Tasks, ensures that no two Task outputs have identical keys, adds the nodes to a sorter by interrogating each Task’s input_keys and output_keys, and sorts them topologically. In our case the sorter comes from graphlib’s TopologicalSorter, which is described here. Getting into what each of these functions is doing would take us too far afield, so we will move on to executing a task plan.

With the expected_task_plan shown above, an execute_task_plan() function is straightforward:

[Code sample embedded in the original post: https://medium.com/media/b734c2dfa055816d3da58282a3a85802/href]

Here we iterate over the task list, deciding between sequential and parallel execution. In the latter case, we use Python’s threading.Thread class to create a thread per task and use idiomatic methods for starting and joining threads. Wait, then what is TaskThread?

In our case, we wanted to ensure that an exception in a child thread will always be raised to the calling thread so the calling thread can exit immediately. So we extended the threading.Thread class with our own class called TaskThread. Overriding threading.Thread’s .run() method is fairly common (so common that it’s suggested in run()’s comments); we overrode run() to set an instance variable carrying an exception’s content and then we check that variable at .join() time.

[Code sample embedded in the original post: https://medium.com/media/32dad77b5be2c90b1dc3e9ef857987cc/href]

The calling thread can now try/except at .join() time.

Conclusion

With these structures in place, the file containing the automation service’s primary functions was reduced from ~500 lines to ~90. Now when we create our threadpool to consume SQS messages, we get the Task plan like so: task_plan = get_task_plan(), and pass the task_plan into each thread. Once execution reaches the main function for performing document intelligence, what previously was a large section of difficult-to-read code now becomes:

[Code sample embedded in the original post: https://medium.com/media/86e2f5c586197e6dd85c8f3c16d22409/href]

The introduction of parallel processing of these Tasks consistently shaved time off of performing document intelligence (an average of about a minute). The real benefit of this change, however, will come in the future as we add more and more Tasks to the pipeline that can be processed in parallel.

While we’ve reduced the time-to-audit significantly from the former state-of-the-art, we are definitely not done. Features like the above will enable us to continue reducing this time while maintaining consistent processing times. We hope this blog helps you in your workflow orchestration research.

Lullabot: Bitmasks in JavaScript: A Computer Science Crash Course

1 month 2 weeks ago

One of the nice things about front-end web development as a career choice is that the software and coding languages are available on every modern machine. It doesn’t matter what operating system or how powerful your machine is. HTML, CSS, and JavaScript will run pretty well on it. This lets a lot of us in the industry bypass the need for formal education in computer science.

Unfortunately, this also has the side effect of leaving little gaps in our knowledge here and there, especially in strategies like bitmasking, which are seldom used in web development.

Specbee: Marketo Webhook Integration with Drupal: Sync Lead Data from Marketo to Drupal in Real-Time

1 month 2 weeks ago
Shefali Shetty | 09 Aug, 2022

When the Association of National Advertisers (ANA) named “Personalization” its “marketing word of the year”, you could feel fairly confident that it’s a strategy that is here to stay. Personalized content adds a human touch to the customer experience, something that is priceless throughout their journey. Stats back this up: around 90% of consumers find personalized content more appealing and get annoyed when content isn’t personalized.

Marketing automation software giant, Marketo, helps B2B and B2C organizations engage and nurture potential leads while enabling marketers to create personalized marketing campaigns around them. 

Combining the power of Marketo with a content management system like Drupal is one of the best ways to present a completely seamless digital experience to customers. 

With Drupal - Marketo integration modules like Marketo MA, you can automate lead capturing, tracking, nurturing, personalization, analytics, and much more. Your Drupal website is also often connected to other third-party services that need up-to-date lead data from Marketo. Enter Webhooks. In one of our recent projects, we used Webhooks to get real-time data from Marketo so that content can be personalized for the customer when they log in. Read on to find out about the Drupal - Marketo integration and how to configure a Webhook to synchronize Marketo data with Drupal in real time.

Setting up Marketo in Drupal

Before moving on with setting up the Drupal - Marketo integration, note that this process assumes that you already have set up your Marketo account and you know how the platform works.

Installing the Marketo MA Drupal Module

On your Drupal admin setup, let’s go ahead and install the Marketo MA module from here. Next, go to Extend and enable the following modules (as shown in the below screengrab):

  • Marketo MA User
  • Marketo MA Webform
  • Marketo MA

API Configuration

Now, let’s activate your Marketo integration by entering the Marketo account ID and other lead capturing details. Here, we are going to use the REST API method to track lead data instead of Munchkin JavaScript API. So, go ahead and enter the REST API configuration settings like the Client ID and the Client Secret.

Field Definition

Here’s where you configure and map your user and Webform fields to the fields defined in your Marketo account (as shown in the below screengrab).

  User Settings

In this section, you can enable a trigger to update the lead in Marketo during events like user login, registration/creation, and updates to the user profile. You can also choose the User fields that should trigger the update and map them to the corresponding Marketo fields.

  Adding the Webform Handler

Now select the Marketo MA webform handler to make sure that the lead is captured via webforms and sent to Marketo.

 

This setup will now let you add lead capturing, tracking, and nurturing capabilities on your Drupal site. You are now ready to send leads from Drupal to your Marketo platform.

How to Configure a Webhook to get Updated Lead Data from Marketo to Drupal

Your leads can come in from different sources. Several of your leads come in through your website's webform, while others may be entered directly into Marketo's dashboard through different marketing channels. 

Sometimes, the user data that is captured and sent over from your Drupal site might get updated on the Marketo dashboard. What happens when you need real-time updated data from Marketo to personalize Drupal content for that user?

The Use Case

Recently, our client’s Drupal website required us to create a Webhook for their content personalization needs. They have a single sign-on system where their users can log in once and can access multiple site areas like events, member login, and shopping. Now after logging in, the content is personalized on the Drupal website based on content segmentations like demographics, job levels, etc. This needed our Drupal site to have updated user data that is synchronized in real-time with their Marketo system.

One not-very-feasible solution would be to make an API call to fetch lead data from Marketo on every user sign-in. However, not only would this slow down the process, it would also prove more expensive, as API requests are billed.

The Solution - Webhooks

Webhooks are basically API requests that are triggered by specific events. Marketo lets you register webhooks to connect to different third-party applications. For this use case, we configured a webhook to get real-time data from Marketo into the Drupal website. Let’s dive into the steps taken to implement webhooks for the Drupal Marketo integration.

Step 1: Create a custom module and define a route for API

First, you need to enable the HTTP Basic Authentication module in your Drupal setup.

marketo_webhook.routing.yml

marketo_webhook.webhook:
  path: '/webhooks/marketo'
  options:
    _auth: [ 'basic_auth' ]
  requirements:
    _user_is_logged_in: 'TRUE'
  defaults:
    _controller: '\Drupal\marketo_webhook\Controller\MarketoWebhookController::getMarketoLeads'
  methods: [POST]

Step 2: Create a controller for the API and store the data in custom fields

<?php

namespace Drupal\marketo_webhook\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

/**
 * Controller for Marketo Webhook.
 */
class MarketoWebhookController extends ControllerBase {

  /**
   * The entity type manager.
   *
   * @var \Drupal\Core\Entity\EntityTypeManagerInterface
   */
  protected $entityTypeManager;

  public function __construct(EntityTypeManagerInterface $entityTypeManager) {
    $this->entityTypeManager = $entityTypeManager;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('entity_type.manager')
    );
  }

  /**
   * Update user Marketo fields.
   */
  public function getMarketoLeads(Request $request) {
    $payload = json_decode($request->getContent(), TRUE);
    $payload_log = implode(',', $payload);
    \Drupal::logger('marketo_webhook')->notice($payload_log);
    if ($payload) {
      if ($payload['mail']) {
        $users = $this->entityTypeManager->getStorage('user')
          ->loadByProperties(['mail' => $payload['mail']]);
        $user = reset($users);
        if ($user) {
          if ($payload['field_job_function'] != 'NA') {
            $user->set('field_job_function', $payload['field_job_function']);
          }
          $user->save();
          return JsonResponse::create('Success', 200);
        }
      }
    }
    return JsonResponse::create('Success', 400);
  }

}

Step 3: Create the Webhook and Marketo Integration

First, you will need to register the Webhook on Marketo. Hop on to your Marketo dashboard and click on the Webhooks option under the Admin >> Integration menu (as shown in the below screengrab).

 

Next, create a New Webhook which will open up a dialog box where you can enter details like the Webhook Name, Description, URL, Request Type, Template, etc.

 

Give a name to the Webhook and an easily understandable description. Enter the URL to submit the web service request.

For example, here:
https://www.specbee.com/webhooks/marketo is the API endpoint for our webhook
Add the Drupal username and password for basic authentication, as shown below:
https://username:[email protected]/webhooks/marketo

Click on the Insert Token button next to Template to add the fields of the Marketo lead object that you want to pass with the request.
For example: "field_job_function": "{{lead.Job Function:default=NA}}". Set the default value to whatever you like; ‘NA’ in our case. This default is returned when there is no data. Make sure the template also passes the lead’s email address as "mail", since the controller above uses it to look up the matching Drupal user.

  Step 4: Create a Smart Campaign

To create the Webhook Marketo Integration, you will now need to set up a Smart Campaign. You can define your own Smart campaigns in Marketo that will run Marketo programs like calling a Webhook, sending out emails after a certain event, etc. A Smart campaign configuration has three parts: Smart List, Flow, and Schedule. You will need to add the trigger to the Webhook under Smart List.

  • Under Marketing Activities and within your program, create a new Smart campaign. 
  • Give the Smart Campaign a name and a description. Here we have called it Drupal Integration.
  • Under Smart List, you will find all available Triggers. Drag and drop the triggers you need into the Smart List. Here, we have selected the Person is Created trigger, but this fires only when a new lead is created. To also catch updates, let’s add another trigger for Data value changes so that it fires whenever the lead data changes.
  • We have selected the Job Function and Job Level attributes under Person to trigger the webhook (as shown in the below screengrab).

 

  • Now, time to call the Webhook. Click on Flow and select the Call Webhook flow action on the right pane and drag it to the Flow. Select the name of the Webhook you created.

 

  • Now that you have created the campaign to call the Webhook, let’s schedule it.

 

  • In the Smart Campaign Settings, click on the Edit button to set how often you want the campaign to run. For our use case, we selected “every time”, as we wanted the webhook to fire every time lead data gets updated. Save that setting and click on ACTIVATE.
  Step 5: Test it out!

Your campaign is now ready to test. You will be able to see all the activities, that is, the number of calls to the Webhook and other details under the Results tab of the Smart campaign.

So ideally, when you create a new lead (Person) or update the Job level or Job Function field of an existing lead, it should call the Webhook and get the lead updated in your Drupal website’s database as well.

This article would not have been possible without Prashanth’s help! Thank you!

Conclusion

Marketing automation platforms like Marketo can be a valuable addition to any organization's marketing strategy to help engage, nurture, and ultimately convert leads. The use of Drupal as a content management system streamlines these activities. In this article, along with showing you how to integrate Marketo with Drupal, we have also covered how to configure webhooks that can let you get updated lead data from Marketo to Drupal. Need help customizing Drupal integrations with Marketo or any other third-party application? We’d be happy to help!

Author: Shefali Shetty

​​Meet Shefali Shetty, Director of Marketing at Specbee. An enthusiast for Drupal, she enjoys exploring and writing about the powerhouse. While not working or actively contributing back to the Drupal project, you can find her watching YouTube videos trying to learn to play the Ukulele :)


Talking Drupal: Talking Drupal #359 - Contribution Events

1 month 2 weeks ago

Today we are talking about Contribution Events.

www.talkingDrupal.com/359

Topics
  • What are contribution events
  • What is the contribution event
  • What are the key goals
  • Can you give us a quick overview of how you started the community initiative
  • Why did each of you feel this was important
  • How did you get involved
  • What was involved in the first event
  • What were lessons learned
  • What were the successes of the first event
  • How can someone have a contribution event
  • Are there differences in having events centered on various areas
  • What are the most important resources
  • How can someone get involved
Resources

Guests

Kristen Pol - www.drupal.org/u/kristen-pol @kristen_pol
Surabhi Gokte - www.drupal.org/u/surabhi-gokte @SurabhiGokte

Hosts

Nic Laflin - www.nLighteneddevelopment.com @nicxvan
John Picozzi - www.epam.com @johnpicozzi
Ryan Price - ryanpricemedia.com - @liberatr

MOTW

Anonymous Login - This is a very simple, lightweight module that will redirect anonymous users to the login page whenever they reach any admin-specified page paths, and will direct them back to the originally-requested page after successful login.

Matt Glaman: ReactPHP for Drupal deployments and workers

1 month 2 weeks ago

I recently held a live stream where I walked through the continuous integration and deployment (CI/CD) of a Drupal project to DigitalOcean's App Platform, along with other CI/CD topics. App Platform has its quirks, but it's simple to build an application with various components. My project, Whiskey Dex, builds a Docker image that is pushed to my container registry; the pipeline then updates my App Platform manifest to use the new image tag, triggering a deployment.

James Oakley: Keeping track of upstream security issues

1 month 3 weeks ago

Drupal no longer releases a new version of Core when an upstream dependency fixes a security vulnerability. It is the responsibility of site maintainers to keep track of security advisories for all such dependent libraries. That is no small task, and a way to automate this is needed. This post looks into how this can be done.
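One way to automate part of this monitoring (the post may describe a different approach) is Composer's built-in audit command, which checks the installed packages against known security advisories and can be run on a schedule in CI:

composer audit

Any advisory affecting a dependency then surfaces without waiting for a corresponding Drupal core release.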


Evolving Web: The Wow Factor: Web Design Beyond the UX Phase

1 month 3 weeks ago

Have you ever been truly blown away by a website design? Think about the first time you saw parallax scrolling, a video background, or an immersive experience that feels both intuitive and fresh. Whatever it was, it probably made you say "wow, that's cool!"

What you may not have realized is that behind every "wow" moment is a designer who made strategic choices about what to include and what to leave out.

In this article, we'll explore how designers create websites that wow us, and how to use similar strategies in your digital projects.

Giving users a sense of “never seen before”

With over 1.93 billion websites, how do you ensure yours is remembered by your target audience? Creating unique and eye-catching content can make all the difference and help your website stand out from the crowd. 

Take our client, Alberta Municipalities for example. With a largely functional website, its main objective is to give members access to services and content they need. To catch the user’s attention, the website uses large typography to build a sense of the organization’s purpose and position the brand in a way that is both bold and subtle. The typography overlaps with the transitions from one content section to another, creating and blurring the divisions at the same time.

Alberta Municipalities - Giving users

Think about your brand when choosing images to use on your website. For Alberta Municipalities, we used strong and inspirational imagery focusing on people and community to reflect Alberta Municipalities' tagline “Strength in Members”. 

We also designed specific icons, image mosaics, buttons, and a mega-menu that refer back to the logo's squares and rectangles for added visual unity and consistency. Our visual design choices convey confidence, strength, collaboration, leadership, and professionalism.

Alberta Municipalities - We serve communities

When building a website, it’s important to keep in mind your brand values, business mission, and target audience. For example, if your business's mission is to connect people, warm, friendly visual designs and rounded typography will better suit your brand and help achieve your goals. On the other hand, if you are in the luxury market, you’d be better off with a restrained design using desaturated colours, sharp serif typography, or work-for-it navigation to create a sense of exclusivity.

These small details all contribute to the overall impact of design, creating that “wow” feeling.

Alberta Municipalities - Together our new brand 

Leveraging design tricks to increase engagement 

Another way to make a lasting impression on the people visiting your website is to surprise them with uncommon shapes, colours, animation, or typography if your brand guidelines allow.

One of the latest design projects our team is proud of is the OCAD U Admissions website design. It features the abundant use of student-created art throughout the interface. As you browse, you’ll find surprising combinations of art in different styles, on the backdrop of the black and white OCAD U brand. While the art itself takes different forms and colours, it’s not disorderly. It’s placed using a predictable geometric pattern, like the black and white squares used to decorate the background.

OCAD U Admission website - Art design and media

The vision behind this design is to give potential students windows into the artistic pathways they can follow at OCAD U. While every student’s journey and every journey to learning design is different, the website helps potential students visualize what is possible.

OCAD U Admission website - Desktop view

Master emotional design for a better user experience

Emotional design is a design approach introduced by Donald Arthur Norman, co-founder of Nielsen Norman Group, in his popular book Emotional Design. This approach aims to create products that trigger a positive and memorable emotional response from the user. It stirs up feelings in the user through deliberate design choices. 

The emotions that a product generates can have a great impact on the users’ perception of the product. For example, when using Asana, the work management platform, if you move a task from “In progress” to “Done”, a unicorn crosses the screen to congratulate you for finishing your task. This animation gives a sense of accomplishment and has a great positive impact on the users’ motivation and engagement. 

When working on The Looking Forward website, we used a similar approach. The Looking Forward website aims to provide a self-service platform that empowers recovering cancer patients with resources to thrive and overcome the challenges they face in recovery.

The main goal was to use this platform to convey hope, love, and joy to inspire people to look forward. After experimenting with different concepts, we decided to go with an abstract concept.

The Looking Forward - Mobile view

 

The website does not show cute illustrations of people, as each patient is different and the audience is very broad. The colourful shapes and slow-animated colour gradients bring a sense of serenity and calm that likely appeals to recovering cancer patients. In this case, the “wow” is not created by something bold but by interactions that reflect the mood of the content.

 

The Looking Forward - Home page

Elevate your design using brand elements

Leveraging your brand elements can help enrich the user’s journey and the brand’s equity. A brand element is a tangible representation of your brand identity and character. It could take the form of a logo, typography, colour palette, messaging, illustrations, shapes or even a sound. Put together, brand elements help cultivate a cohesive brand identity.

For the McGill University SAFE website, we used the brand’s patterns, colours, and forward-moving arrows repeatedly throughout the digital design of the website. This reinforces the brand and builds a sense of consistency. 

SAFE - Safe is your exercise program towards healthy aging 

We kept the design simple to ensure the target audience, senior adults, can easily navigate the website without friction. While the design is simple, focusing users on the videos at their level, the use of custom geometric icons based on the logo adds visual interest and a unique look that delights users as they explore the content. The energy and sense of motion create a “wow” factor that propels the user forward.

 

Safe - Desktop view

When it comes to design, the possibilities are endless. So don't be afraid to think outside the box and explore all the possibilities for your project. With a little creativity, you can find the wow factor that will take your site to the next level.

Need help in (re)designing your website?

