Drupal Planet

Drupal Core News: Drupal core will adopt Gin admin theme to replace Claro

1 week 3 days ago

Drupal effectively has two default administration themes: Claro for core, and Gin for Drupal CMS. This causes difficulty for UX designers and product managers, because new features must work well with both themes.

Gin is no longer an experimental fork of Claro for trying out new ideas. It has matured into a state-of-the-art admin theme while Claro has fallen behind, as evidenced by the decision to use Gin as the admin theme for Drupal CMS. As a result, we feel it is time for Gin to become the default admin theme for Drupal core.

We are aspiring to have this work completed by November 2025 in order to get Gin into core for the release of 11.3 in December.

What's next?

A core-ready version of Gin will be developed outside of core in a 6.x branch of Gin. Our goal is for the Gin maintainers to collaborate with the Drupal Core Product, UX, Release, and Frontend Framework Managers to identify which issues are blockers for Gin in core.

Once the identified blockers are completed, the result will be merged into core for Drupal 11.3 by the beta deadline in November 2025. The most important step for including Gin in core is to remove its dependency on Claro, since Gin will replace Claro as the default admin theme.

Other work will include removing features that are not needed for core; simplifying the code now that Gin only needs to support the version of core that includes it; and other tasks like adding necessary test coverage to ensure a smooth transition from contrib to core.

What happens to Claro?

Although Claro will not be the default theme anymore for new sites, it will remain in Drupal 11 for use on existing sites. Claro is planned to be removed from core in Drupal 12, at which point it may become available as a contributed theme outside of core.

How can I get involved?

This is a big job with an ambitious timeline, so we will need many contributors to meet it. To get started, see the two meta issues that track this work, #3530849 (Gin 6.x) and #3530852 (Admin theme modernisation), one for Gin tasks and one for core tasks. These will be updated, and many new tasks will be created, as the scope of work is clarified.

The Gin maintainers are also seeking sponsors for their time, which is a great way to contribute to this effort if you want to see this happen but are not able to work on tasks directly.

If you are interested, please join us in the #admin-ui channel in the Drupal Slack to collaborate.

Evolving Web: Support Maya's Candidacy for the Drupal Association Board Elections

1 week 4 days ago

I’m resharing my recent interview with The Drop Times about my platform for the Drupal Association 2025 Board Election because I want to give our community a clear look at where I stand, what I believe Drupal needs next, and how I’d approach my role if elected to the Drupal Association Board. From making events and contribution pathways more inclusive to strengthening Drupal’s brand while reaching new audiences, I hope these ideas spark conversation and reflect the practical, community-first mindset I’d bring to the board.

Voting is now open until July 11 at 23:59 UTC. To be eligible to vote, you need to have been a Drupal Association Ripple Maker at least 24 hours before voting opened on June 18. Ripple Makers, be sure to look out for the email from the Drupal Association with your voter ID, password, and instructions.

Below you will find my interview with Ben Beter of The Drop Times and my vision for the future of Drupal:

1. You’ve welcomed 40% of attendees from outside Drupal at EvolveDrupal—what concrete steps would you take as a board member to ensure that those first-time or non-technical participants feel empowered to contribute beyond in-person events?

What we’ve seen at EvolveDrupal is that getting someone through the door is only step one. A good event might spark someone’s interest—but what they really need afterward are clear, approachable next steps.

If I were on the board, I’d push for things like regular beginner-friendly webinars, ideally led by agency folks who can break things down in a way that’s not intimidating. I’d also love to see more storytelling—hearing from people who work in marketing, UX, content, etc., who are already using Drupal, so newcomers can actually see themselves in this space.

On the practical side, we could build out a non-technical contributor guide on drupal.org. Right now, the site is very developer-focused, and it’s not always easy for someone from a design or communications background to know where they fit in.

I’d also like to see a post-event “what’s next” guide—a simple follow-up email with curated links, a breakdown of contribution options, and how to join the community or a working group. And at events like Digital Collegium or Educause, the DA should absolutely have a presence. These are the types of places where we can reach non-technical people who are using—or could use—Drupal every day.


2. Running events in six North American cities post-pandemic showed incredible momentum. How would you advise the Drupal Association to scale that model globally, balancing centralized strategy and local autonomy, without diluting what makes each gathering unique?

I think the key is to support, not control. What made EvolveDrupal work is that we kept things lightweight, flexible, and community-focused. The DA could definitely help other organizers by offering a basic playbook—just something that outlines what’s worked for us, especially around getting non-Drupal folks in the door, and tips for outreach and sponsorship.

Templates, branding assets, or just having a shared doc of what speakers or formats worked well—that would go a long way. But at the end of the day, each region should have the freedom to do what fits their audience. What works in Ottawa might not land in Lagos or São Paulo—and that’s okay.

Also, the DA could make a real difference by offering even small sponsorships or in-kind support—like signal boosts, intros to speakers, or even just showing up. And there’s already an event organizer group—I'd love to connect more with that crew and share what we’ve learned.

Lastly, I think it would be really helpful to build a community of practice—just a way for organizers around the world to share ideas, speakers, and lessons learned. That way, we keep the energy up without trying to clone the same event everywhere.


3. You aim to be a voice for marketers, designers, and communicators. What is one current board decision or policy that you believe doesn’t adequately engage non-technical contributors, and how would you address that imbalance if elected?

Honestly, I think a lot of DA programs are still geared toward developers. Even something like the "Ripple Maker" term—it’s clever, but it doesn’t really speak to folks in marketing or comms. And while I do think the Drupal.org rebrand was a great step forward, we’re still missing content that’s actually tailored to non-technical people.

For example, Supporting Partner benefits mostly focus on things like commit credits and developer recognition. If I were on the board, I’d push to also highlight contributions in areas like accessibility, UX, design systems, and community outreach.

We also need to be more present in non-Drupal spaces—but not just with a booth. Booths are fine, but what matters is what happens at them. If we show up to conferences like Educause or HighEdWeb, we should be doing live demos, sharing case studies, and talking to people on their terms. That part is often missing.

Another thing: we should ask agencies to help spread the word. I know GDPR limits how we share contact info, but we can still create referral-friendly content, like newsletter templates or event invites that agencies and end-user organizations—like universities—can pass on internally.

I also think DrupalCon could do more to intentionally invite marketers. Maybe a marketing track, or something like a dedicated Marketing & Comms Summit, just like we do for government or higher ed. And, of course, keeping event prices low is key to getting more diverse voices in the room.


4. Your growth of EvolveDrupal into EvolveDigital signals openness beyond Drupal. In the context of the Promote Drupal initiative, how would you ensure Drupal maintains brand identity and technical depth while engaging broader audiences and digital disciplines?

EvolveDigital isn’t about stepping away from Drupal—it’s about making space for the broader ecosystem around it. The reality is, Drupal doesn’t exist in a vacuum. It’s always used alongside other tools and strategies. So rather than trying to wall it off, we should lean into that and show how Drupal fits into modern digital teams.

I think Promote Drupal can support this with a two-track approach:

  1. Keep speaking to technical folks—architects, CTOs, site builders—with deep case studies and strong messaging around flexibility, scalability, and performance.
  2. At the same time, create sector-specific narratives for content teams, marketers, and digital strategists—stories that highlight accessibility, ownership, integration, and total cost of ownership.

We should also make sure that people from these roles are involved early in the process—not just reviewing messaging, but helping shape it.

At the end of the day, I don’t think widening the conversation weakens Drupal’s brand. If anything, it shows how versatile and future-proof it is. We just need to get better at telling that story to all the right people.

Conclusion

If I were on the board, I’d focus on helping Drupal grow beyond its current edges—by making the project more welcoming to non-technical contributors, easier to access for new users, and more visible to sectors that don’t yet realize how much Drupal can offer. My experience organizing EvolveDrupal (now EvolveDigital) has shown me what’s possible when we create spaces that are inclusive, cross-functional, and community-powered. I’d bring that same mindset to the DA—supporting meaningful events, clearer contribution pathways, and smarter communication that reflects the full diversity of our ecosystem.

In short, if elected, I would:

  • Help make drupal.org and DA programs more welcoming for non-developers—marketers, designers, strategists, and first-time users
  • Support the growth of regional events with playbooks, shared resources, and lightweight DA support
  • Bring community energy and real-world event experience to DA discussions and strategic planning
  • Help evolve Promote Drupal to speak to a broader audience—while protecting Drupal’s depth and technical strengths
  • Push for more recognition and visibility for non-code contributions like accessibility, UX, content, and outreach
  • Work with product marketers from Drupal Association, Acquia, Pantheon and other community leaders to help define and communicate Drupal’s strengths
  • Build on the progress of the Drupal.org redesign and the brand refresh that’s already underway (using my marketing background to keep the momentum going), and help deliver a more cohesive, engaging Drupal brand and website for everyone

I’m excited about what’s ahead for Drupal—and I’d love to help shape that future from a place of inclusion, energy, and practical momentum.

 If my vision resonates with you, I’d be honoured to have your support in this election.

+ more awesome articles by Evolving Web

The Drop Times: SPARC: Solar Powered Advanced Renewable Control

1 week 4 days ago
Jasper Lammens explains how SPARC helps homeowners optimize solar energy use by combining Solcast forecasts, InfluxDB storage and Drupal ECA rules to send Discord notifications with ideal appliance schedules. Its best-fit, worst-fit and first-fit algorithms boost on-site solar consumption and cut grid reliance.

DrupalCon News & Updates: DrupalCon Vienna 2025: Where Business Meets Technology

1 week 4 days ago

In today’s fast-paced digital economy, successful organizations must align their technical architecture with strategic business goals. DrupalCon Vienna 2025 isn’t just a gathering of developers; it’s a dynamic intersection where business leaders, marketers, technical architects, and product managers come together to explore how Drupal drives innovation and delivers measurable impact.

Whether you're launching global digital platforms, scaling personalization, improving time to market, or enhancing user experience, this year’s DrupalCon is where you’ll find the insights, tools, and partners to make it happen.


Bridging the Gap Between Vision and Execution

Modern digital strategies depend on a solid technical foundation. At DrupalCon Vienna, you’ll hear directly from digital leaders and enterprise teams who have successfully:

  • Modernized outdated platforms
  • Integrated Drupal with CRM, CDP, and analytics tools
  • Delivered omnichannel content at scale
  • Aligned marketing, IT, and product teams for cohesive execution

Case studies, panel discussions, and solution showcases will provide practical takeaways that go beyond buzzwords, from project planning and governance to content strategy and performance optimization.


Drupal as a Strategic Business Platform

Drupal has evolved far beyond a CMS. It's a flexible, enterprise-ready digital experience platform (DXP) capable of powering everything from campaign websites to enterprise portals. At DrupalCon Vienna 2025, sessions will explore:

  • Composability: How modular architecture lets businesses adapt quickly
  • Integration-first thinking: Making Drupal the backbone of your digital ecosystem
  • Scalability: Supporting millions of users while maintaining performance and agility
  • Data and analytics: Building data-driven content strategies

If you're leading digital transformation efforts, this is your opportunity to see how Drupal is enabling organizations to innovate with confidence.


Business + Tech Networking That Matters

DrupalCon Vienna provides unmatched opportunities for networking with people who share your challenges and ambitions, from digital directors and CTOs to agency leaders and product strategists.

You’ll meet:

  • Enterprise users running Drupal at scale
  • Solution architects building custom platforms
  • Technology vendors and service providers
  • Decision-makers planning their next investment

These conversations often spark partnerships, ideas, and collaborations that last long after the event ends.

Workshops and Strategy Sessions

For teams focused on long-term planning, DrupalCon offers curated content designed to guide your strategic roadmap. Attend workshops and sessions on:

  • Project architecture for growth and flexibility
  • Governance models for distributed teams
  • Digital asset management, personalization, and localization
  • Open source procurement and risk mitigation

Whether you're managing your first Drupal rollout or optimizing a mature platform, there’s content here for every stage of your digital journey.


Ready to Align Your Business and Tech Strategy?

DrupalCon Vienna 2025 is more than a conference; it's a catalyst for growth, innovation, and alignment. Join us to discover how forward-thinking organizations are leveraging Drupal to unlock digital success across departments and markets.


Mark Your Calendars

🗓️ Dates: October 14–17, 2025
 📍 Location: Austria Center Vienna, Vienna, Austria
 🌐 Official Website & Registration: https://events.drupal.org/vienna2025/registration-information
 🐦 Follow the buzz: #DrupalConVienna #DrupalCon2025

Stay Tuned!

This blog is just the beginning. Over the next few weeks, I’ll be sharing:

  • Technical spotlights on Drupal CMS features
  • Speaker highlights and session previews
  • Tips for first-time technical attendees and contributors

So bookmark this space, and get ready to experience DrupalCon Vienna 2025 like never before.

Are you coming? Let’s connect!


By Iwantha Lekamge

Technical Lead
WSO2

The Drop is Always Moving: Drupal 11.2.0 improves backend and frontend performance and scalability, completes the introduction of OOP support of hooks, adds JSON Schema support, includes AVIF image format capability, supports SDC variants, and more…

1 week 4 days ago

Drupal 11.2.0 improves backend and frontend performance and scalability, completes the introduction of OOP support of hooks, adds JSON Schema support, includes AVIF image format capability, supports SDC variants, and more. https://www.drupal.org/blog/drupal-11-2-0

Drupal blog: Drupal 11.2.0 is now available

1 week 4 days ago
New in Drupal 11.2

The second feature release of Drupal 11 improves backend and frontend performance and scalability, completes the introduction of OOP support of hooks, adds JSON Schema support, includes AVIF image format capability, supports SDC variants, and more.

Extension and site installation is three to four times as fast as in Drupal 11.1.0

Thanks to various optimizations to container rebuilding and the installer, installing Drupal itself or extensions is now three to four times as fast. The improvements are similar in the user interface, but most apparent when using Drush. In this video, we show Drupal 11.2 installing 60 modules in 5.7 seconds, while Drupal 11.1 takes four times as long to do the same:

.module files are not needed anymore!

Starting with Drupal 11.2, the last APIs that needed .module files can be implemented as object-oriented hooks too! Developers can use the #[RemoveHook] attribute to remove hooks, #[ReOrderHook] to change hook ordering, and #[Preprocess] to declare object-oriented preprocess hooks. There is now no need for a .module file if all of a module's hooks are on classes in its Hook namespace.
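As a sketch of what this looks like (the module and hook names here are hypothetical; the #[Hook] attribute is the one Drupal core provides in the Drupal\Core\Hook\Attribute namespace):

```php
<?php

namespace Drupal\my_module\Hook;

use Drupal\Core\Hook\Attribute\Hook;

/**
 * All of this module's hooks live here: no my_module.module file needed.
 */
class MyModuleHooks {

  // Equivalent to implementing hook_user_login() in a .module file.
  #[Hook('user_login')]
  public function userLogin($account): void {
    // React to a user logging in.
  }

}
```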

Built-in JSON Schema generation for content entities

When working with Drupal entities over an API, it is important for developer experience to have a schema for the data structure of a particular entity. This allows clients to know, for instance, what acceptable values may be sent or received for the value and format properties of a formatted text field.

Drupal core can now generate JSON Schemas for content entity types. The typed data, serialization and field APIs have been enhanced to allow field-level schemas to be generated based on their storage configuration.

All field types shipped by core now provide JSON Schemas out of the box through their default normalizers. In addition, all the core typed data plugins provide JSON Schemas as well. This means that all core fields can generate JSON Schemas for their properties out of the box. Additionally, most field types provided by contributed projects or custom modules will generate JSON Schemas automatically so long as they do not provide a custom normalizer or depend on non-core typed data plugins.
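To illustrate the idea, here is a hand-written JSON Schema fragment for the value and format properties of a formatted text field; the exact schema Drupal core generates will be more detailed:

```json
{
  "type": "object",
  "properties": {
    "value": {
      "type": "string",
      "title": "Text"
    },
    "format": {
      "type": "string",
      "title": "Text format"
    }
  }
}
```

A client reading such a schema knows both properties are strings before it ever sends a request.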

Native variant support added to Single-Directory Components

In design systems, a variant allows grouping multiple component properties into a predefined set. The variant can then be used as a shortcut to render a component. Front-end developers could previously define a variant as a prop, but this approach did not support custom titles or descriptions to convey the variant’s purpose.
Now, you can use variants as a property at the root of your component declaration:

name: Card
variants:
  primary:
    title: Primary
    description: ...
  secondary:
    title: Secondary
    description: ...
props: {}
slots: {}

AVIF support added with fallback to WebP

Drupal 11.2 now supports AVIF in our image toolkit. AVIF offers better compression and image quality than WebP, especially for high-resolution images and HDR content. However, not all servers support conversion to AVIF. For that reason, a fallback mechanism was added to convert to WebP when AVIF support is not available.

CSS page weight improvements

Drupal core has long supported component-based CSS organization and conditional loading based on the elements present on a page. Using this system, the default CSS added to every page by Drupal core has been reduced from around 7 KB to 1 KB. This reduces bandwidth use and improves page rendering times for all but the most highly customized sites running 11.2.

Navigation improvements

The modern Navigation module now automatically enables the built-in top bar functionality as well. An "overview" link is now shown when a menu item is a container for child items, making it easier to find the right page. Numerous other blockers have also been resolved, and this experimental module is close to becoming stable in a future minor release.

Recipe dependencies are now unpacked

Drupal recipes are special Composer packages designed to bootstrap Drupal projects with necessary dependencies. When a recipe is required, a new Composer plugin "unpacks" it by moving the recipe's dependencies directly into your project's root composer.json, and removes the recipe as a project dependency. This makes it possible to update those dependencies later and to not have the recipe as an active dependency of the site anymore.

Changes to Update Status module to better support modern workflows

Update Status now checks the status of uninstalled extensions, making your site even more secure.
Updating themes and modules in the Update Status module with authorize.php was not Composer-aware. This could cause various serious problems for sites and site deployment workflows. Therefore, this legacy feature has now been removed. Projects should generally be updated on the command line with Composer. The experimental Update Manager (Automatic Updates) will also be used for this in the future.

Cache efficiency improvements

Significant improvements have been made to Drupal's render cache performance due to optimizations in placeholder processing and cache tag invalidation checks. This results in smaller cache entries with fewer cache dependencies in the dynamic page cache, leading to higher cache hit rates and reduced cache storage requirements. The reduction in cache tag lookups reduces round trips to persistent cache storage backends on every HTML response. This applies whether the cache tag backend is using database, memcache, or redis, and leads to slightly faster page rendering performance on both dynamic page cache hits and misses. There is also a significant reduction in queries per second (QPS) for high-traffic sites, which should allow caching servers to handle more traffic with lower hardware requirements.

PHPUnit 11 support added

PHPUnit 11 can now be used for testing. While the default version remains PHPUnit 10, you can update to PHPUnit 11 with the command composer update phpunit/phpunit --with-dependencies. Drupal core testing on PHP 8.4 requires at least PHPUnit 11.

Core maintainer team updates

Since Drupal 11.1, Emma Horrel and Cristina Chumillas were announced as UX Managers.

Griffyn Heels joined as a provisional Core Leadership Team Facilitator. Juraj Nemec and Drew Webber were added as general core committers, and Pierre Dureau was added as a provisional Frontend Framework Manager. Check out their announcement.

Six people stepped up to become subsystem maintainers! Nic Laflin became a maintainer of the Extension API, Lee Rowlands became a co-maintainer of the Form and Render APIs, Adam Bramley became a maintainer of the Node module, Jean Valverde became a co-maintainer of Single-Directory Components, Mark Conroy became the maintainer of the Stable 9 theme, and Brad Jones became a co-maintainer of Serialization. Many of the improvements above are thanks to leadership from these new maintainers!

Three subsystem maintainers stepped back. We thank Claudiu Cristea, Christian Fritsch, and Daniel Wehner for their immense contributions.

Finally, there have also been changes in the mentoring coordinator team: James Shields joined, while Mauricio Dinarte, AmyJune Hineline and Tara King stepped back from the role. Many Drupal contributors are thankful to have been mentored by them!

Drupal 10.5 is also available

The next maintenance minor release of Drupal 10 has also been released. Drupal 10 will be supported until December 9, 2026, after the release of Drupal 12. Long-term support for Drupal 10 is managed with a new maintenance minor release every six months, each receiving twelve months of support. This allows the maintenance minor to adapt to evolving dependencies. It also gives more flexibility for sites to move to Drupal 11 when they are ready.

This release schedule allows sites to move from one LTS version to the next if that is the best strategy for their needs. For more information on maintenance minors, read the previous post on the new major release schedule.

Want to get involved?

If you are looking to make the leap from Drupal user to Drupal contributor, or you want to share resources with your team as part of their professional development, there are many opportunities to deepen your Drupal skill set and give back to the community. Check out the Drupal contributor guide. Join us at DrupalCon Vienna in October 2025 or DrupalCon Nara in November 2025 to attend sessions, network, and enjoy mentorship for your first contributions.

eiriksm.dev: Drupal deployment confidence part 2: Composer validate

1 week 5 days ago

This is part 2 of a series on having CI pipelines for your Drupal site, and building Drupal Deploy Confidence. In this part of our series, we’re looking at another low-hanging fruit: composer validate. It's quick to set up and can save your team from subtle but frustrating dependency issues.

What does composer validate actually do?

According to the official Composer documentation, this command:

...checks if your composer.json is valid. If a composer.lock exists, it will also check if it is up to date with the composer.json.

That might sound trivial, but in practice it solves a common and annoying problem: when the lock file is out of sync with composer.json.

You’ve probably seen this warning:

Warning: The lock file is not up to date with the latest changes in composer.json. You may be getting outdated dependencies. Run update to update them.

In a CI context, composer validate will fail with a non-zero exit code if this mismatch is found. That’s a good thing - it prevents developers from merging code that might lead to confusion, instability, or surprise dependency changes.

Why you should care

Here’s why we make composer validate a standard in CI pipelines:

1. It prevents outdated or unexpected dependencies
When the lock file is behind composer.json, your project might not actually install the packages you think it will. That’s a recipe for bugs that are hard to trace, especially in production.

Failing early in CI ensures that your dependencies are exactly what you committed to - nothing more, nothing less.

2. It avoids misleading advice
The warning suggests you can simply run composer update to fix the mismatch. But unless you're deliberately updating all your dependencies, this is often bad advice.

Running composer update without constraints may introduce unintended package upgrades, which can break other parts of the system, introduce regressions, or complicate debugging.

By enforcing a clean lock file, you avoid needing to deal with this situation altogether.

Optional strictness

By default, composer validate also checks things like whether your composer.json declares a name and description. This is particularly useful for packages intended for public reuse. For regular web projects, we usually skip these extra rules - they don’t add much value in a project context. In practice, that means the validate job of our CI pipeline runs this command:

composer validate --no-check-version --no-check-publish

To make sure this is in fact literally the same on all platforms, we add a wrapper script to composer.json, so we can simply run 
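The wrapper lives in the scripts section of composer.json. A minimal sketch (the script name ci:validate matches the command used in this post; @composer is Composer's built-in way for a script to invoke Composer itself):

```json
{
  "scripts": {
    "ci:validate": "@composer validate --no-check-version --no-check-publish"
  }
}
```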

composer ci:validate

Conclusion

If you have followed the series so far, you should now have two CI jobs in your pipeline. See the links below for both jobs on all three major version control platforms:

Direct link to pipeline for Bitbucket: https://bitbucket.org/eirikmorland/drupal-confidence/src/part-2/bitbucket-pipelines.yml
Direct link to pipeline for GitLab: https://gitlab.com/eiriksm/drupal-confidence/-/blob/part-2/.gitlab-ci.yml?ref_type=heads
Direct link to pipeline for GitHub: https://github.com/eiriksm/drupal-confidence/blob/part-2/.github/workflows/test.yml

Drupalize.Me: SEO for Drupal Users: What You Need to Know

1 week 5 days ago
SEO for Drupal Users: What You Need to Know

When I was writing documentation for Drupal CMS’s SEO Tools recommended add-on (aka “recipe”), I realized that not all Drupal site users may be up-to-date on the essentials of SEO and how Drupal can help you make your site discoverable by your target audiences.

While Drupal has long been a solid foundation for building search-friendly websites, that doesn’t mean every Drupal user knows where to start with SEO.

In this post, I’ll define essential SEO concepts and highlight best practices direct from the documentation I wrote about promoting your site with SEO for the Drupal CMS User Guide.

Whether you're configuring a custom Drupal 11 site or using Drupal CMS, these tips apply to you. All of the tools mentioned below may be installed on any Drupal 11 site or installed via the SEO Tools recommended add-on in Drupal CMS.

Amber Matz Wed, 06/18/2025 - 14:41

Dries Buytaert: Automating alt-text generation with AI

1 week 5 days ago

Billions of images on the web lack proper alt-text, making them inaccessible to millions of users who rely on screen readers.

My own website is no exception, so a few weeks ago, I set out to add missing alt-text to about 9,000 images on this website.

What seemed like a simple fix became a multi-step challenge. I needed to evaluate different AI models and decide between local or cloud processing.

To make the web better, a lot of websites need to add alt-text to their images. So I decided to document my progress here on my blog so others can learn from it – or offer suggestions. This third post dives into the technical details of how I built an automated pipeline to generate alt-text at scale.

High-level architecture overview

My automation process follows three steps for each image:

  1. Check if alt-text exists for a given image
  2. Generate new alt-text using AI when missing
  3. Update the database record for the image with the new alt-text

The rest of this post goes into more detail on each of these steps. If you're interested in the implementation, you can find most of the source code on GitHub.

Retrieving image metadata

To systematically process 9,000 images, I needed a structured way to identify which ones were missing alt-text.

Since my site runs on Drupal, I built two REST API endpoints to interact with the image metadata:

  • GET /album/{album-name}/{image-name}/get – Retrieves metadata for an image, including title, alt-text, and caption.
  • PATCH /album/{album-name}/{image-name}/patch – Updates specific fields, such as adding or modifying alt-text.

I've built similar APIs before, including one for my basement's temperature and humidity monitor. That post provides a more detailed breakdown of how I build endpoints like this.

This API uses separate URL paths (/get and /patch) for different operations, rather than using a single resource URL. I'd prefer to follow RESTful principles, but this approach avoids caching problems, including content negotiation issues in CDNs.

Anyway, with the new endpoints in place, fetching metadata for an image is simple:

[code bash]curl -H "Authorization: test-token" \
  "https://dri.es/album/isle-of-skye-2024/journey-to-skye/get"[/code]

Every request requires an authorization token. And no, test-token isn't the real one. Without it, anyone could edit my images. While crowdsourced alt-text might be an interesting experiment, it's not one I'm looking to run today.

This request returns a JSON object with image metadata:

[code bash]{
  "title": "Journey to Skye",
  "alt": "",
  "caption": "Each year, Klaas and I pick a new destination for our outdoor adventure. In 2024, we set off for the Isle of Skye in Scotland. This stop was near Glencoe, about halfway between Glasgow and Skye."
}[/code]
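In Python, the same GET call can be prepared with the standard library alone. This is a sketch based on the endpoint and header shown above; `needs_alt_text` is a hypothetical helper that captures the "is the alt field empty?" check:

```python
import urllib.request

BASE_URL = "https://dri.es/album"

def build_get_request(album, image, token):
    """Build the authorized GET request for an image's metadata."""
    url = f"{BASE_URL}/{album}/{image}/get"
    return urllib.request.Request(url, headers={"Authorization": token})

def needs_alt_text(metadata):
    """An image needs new alt-text when the alt field is missing or blank."""
    return not metadata.get("alt", "").strip()
```

Executing the request is then a matter of passing it to `urllib.request.urlopen()` and parsing the JSON body.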

Because the alt field is empty, the next step is to generate a description using AI.

Generating and refining alt-text with AI

In my first post on AI-generated alt-text, I wrote a Python script to compare 10 different local Large Language Models (LLMs). The script uses PyTorch, a widely used machine learning framework for AI research and deep learning. This implementation was a great learning experience.

The original script takes an image as input and generates alt-text using multiple LLMs:

[code bash]./caption.py journey-to-skye.jpg
{
  "image": "journey-to-skye.jpg",
  "captions": {
    "vit-gpt2": "A man standing on top of a lush green field next to a body of water with a bird perched on top of it.",
    "git": "A man stands in a field next to a body of water with mountains in the background and a mountain in the background.",
    "blip": "This is an image of a person standing in the middle of a field next to a body of water with a mountain in the background.",
    "blip2-opt": "A man standing in the middle of a field with mountains in the background.",
    "blip2-flan": "A man is standing in the middle of a field with a river and mountains behind him on a cloudy day.",
    "minicpm-v": "A person standing alone amidst nature, with mountains and cloudy skies as backdrop.",
    "llava-13b": "A person standing alone in a misty, overgrown field with heather and trees, possibly during autumn or early spring due to the presence of red berries on the trees and the foggy atmosphere.",
    "llava-34b": "A person standing alone on a grassy hillside with a body of water and mountains in the background, under a cloudy sky.",
    "llama32-vision-11b": "A person standing in a field with mountains and water in the background, surrounded by overgrown grass and trees."
  }
}[/code]

My original plan was to run everything locally for full control, no subscription costs, and optimal privacy. But after testing 10 local LLMs, I changed my mind.

I knew cloud-based models would be better, but wanted to see if local models were good enough for alt-texts. Turns out, they're not quite there. You can read the full comparison, but I gave the best local models a B, while cloud models earned an A.

While local processing aligned with my principles, it compromised the primary goal: creating the best possible descriptions for screen reader users. So I abandoned my local-only approach and decided to use cloud-based LLMs.

To automate alt-text generation for 9,000 images, I needed programmatic access to cloud models rather than relying on their browser-based interfaces — though browser-based AI can be tons of fun.

Instead of expanding my script with cloud LLM support, I switched to Simon Willison's llm tool: https://llm.datasette.io/. llm is a command-line tool and Python library that supports both local and cloud-based models. It takes care of installation, dependencies, API key management, and uploading images. Basically, all the things I didn't want to spend time maintaining myself.
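Driving the llm CLI from a script comes down to composing a command line. A minimal sketch, assuming llm's documented `-m` (model) and `-a` (attachment) flags; the exact flags and the shelling-out approach are illustrative, not necessarily how the author's script invokes it:

```python
def build_llm_command(image_path, model, prompt):
    """Compose the argv for a caption request to the llm CLI.

    Run with subprocess.run(cmd, capture_output=True, text=True)
    and read the caption from stdout.
    """
    return ["llm", "-m", model, "-a", image_path, prompt]
```

Because llm manages API keys and model plugins itself, the calling script never has to touch provider SDKs directly.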

Despite enjoying my PyTorch explorations with vision language models and multimodal encoders, I needed to focus on results. My weekly progress goal meant prioritizing working alt-text over building homegrown inference pipelines.

I also considered you, my readers. If this project inspires you to make your own website more accessible, you're better off with a script built on a well-maintained tool like llm rather than trying to adapt my custom implementation.

Scrapping my PyTorch implementation stung at first, but building on a more mature and active open-source project was far better for me and for you. So I rewrote my script, now in the v2 branch, with the original PyTorch version preserved in v1.

The new version of my script keeps the same simple interface but now supports cloud models like ChatGPT and Claude:

[code bash]./caption.py journey-to-skye.jpg --model chatgpt-4o-latest claude-3-sonnet --context "Location: Glencoe, Scotland"
{
  "image": "journey-to-skye.jpg",
  "captions": {
    "chatgpt-4o-latest": "A person in a red jacket stands near a small body of water, looking at distant mountains in Glencoe, Scotland.",
    "claude-3-sonnet": "A person stands by a small lake surrounded by grassy hills and mountains under a cloudy sky in the Scottish Highlands."
  }
}[/code]

The --context parameter improves alt-text quality by adding details the LLM can't determine from the image alone. This might include GPS coordinates, album titles, or even a blog post about the trip.

In this example, I added "Location: Glencoe, Scotland". Notice how ChatGPT-4o mentions Glencoe directly while Claude-3 Sonnet references the Scottish Highlands. This contextual information makes descriptions more accurate and valuable for users. For maximum accuracy, use all available information!

Updating image metadata

With alt-text generated, the final step is updating each image. The PATCH endpoint accepts only the fields that need changing, preserving other metadata:

[code bash]curl -X PATCH \
  -H "Authorization: test-token" \
  "https://dri.es/album/isle-of-skye-2024/journey-to-skye/patch" \
  -d '{
    "alt": "A person stands by a small lake surrounded by grassy hills and mountains under a cloudy sky in the Scottish Highlands."
  }'[/code]
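The equivalent request is easy to build in Python with the standard library. A sketch along the lines of the curl call above; only the fields being changed go into the body:

```python
import json
import urllib.request

def build_patch_request(album, image, token, fields):
    """Build a PATCH request that sends only the fields being changed."""
    url = f"https://dri.es/album/{album}/{image}/patch"
    body = json.dumps(fields).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="PATCH",
        headers={"Authorization": token, "Content-Type": "application/json"},
    )
```

Passing a dict with only the `alt` key leaves the title and caption untouched, exactly as the endpoint is described above.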

That's it. This completes the automation loop for one image. It checks if alt-text is needed, creates a description using a cloud-based LLM, and updates the image if necessary. Now, I just need to do this about 9,000 times.

Tracking AI-generated alt-text

Before running the script on all 9,000 images, I added a label to the database that marks each alt-text as either human-written or AI-generated. This makes it easy to:

  • Re-run AI-generated descriptions without overwriting human-written ones
  • Upgrade AI-generated alt-text as better models become available
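The decision logic this label enables fits in one small function. A sketch with hypothetical field names (`alt_source` with values `"human"` and `"ai"` stands in for whatever the database label actually looks like):

```python
def should_write_alt_text(record, upgrading=False):
    """Decide whether to (re)generate alt-text for a database record.

    - Missing or empty alt-text is always generated.
    - Human-written alt-text is never overwritten.
    - AI-generated alt-text is rewritten only when upgrading to a better model.
    """
    if not record.get("alt"):
        return True
    if record.get("alt_source") == "human":
        return False
    return upgrading
```

Running the pipeline with `upgrading=True` after a new model release then refreshes only the machine-written descriptions.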

With this approach, I can update the AI-generated alt-text when ChatGPT 5 is released. And eventually, it might allow me to return to my original principles: to use a high-quality local LLM trained on public domain data. In the meantime, it helps me make the web more accessible today while building toward a better long-term solution tomorrow.

Next steps

Now that the process is automated for a single image, the last step is to run the script on all 9,000. And honestly, it makes me nervous. The perfectionist in me wants to review every single AI-generated alt-text, but that is just not feasible. So, I have to trust AI. I'll probably write one more post to share the results and what I learned from this final step.

Stay tuned.

Dries Buytaert: If a note can be public, it should be

1 week 5 days ago

A few years ago, I quietly adopted a small principle that has changed how I think about publishing on my website. It's a principle I've been practicing for a while now, though I don't think I've ever written about it publicly.

The principle is: If a note can be public, it should be.

It sounds simple, but this idea has quietly shaped how I treat my personal website.

I was inspired by three overlapping ideas: digital gardens, personal memexes, and "Today I Learned" entries.

Writers like Tom Critchlow, Maggie Appleton, and Andy Matuschak maintain what they call digital gardens. They showed me that a personal website does not have to be a collection of polished blog posts. It can be a living space where ideas can grow and evolve. Think of it more as an ever-evolving notebook than a finished publication, constantly edited and updated over time.

I also learned from Simon Willison, who publishes small, focused Today I Learned (TIL) entries. They are quick, practical notes that capture a moment of learning. They don't aim to be comprehensive; they simply aim to be useful.

And then there is Cory Doctorow. In 2021, he explained his writing and publishing workflow, which he describes as a kind of personal memex. A memex is a way to record your knowledge and ideas over time. While his memex is not public, I found his approach inspiring.

I try to take a lot of notes. For the past four years, my tool of choice has been Obsidian. It is where I jot things down, think things through, and keep track of what I am learning.

In Obsidian, I maintain a Zettelkasten system. It is a method for connecting ideas and building a network of linked thoughts. It is not just about storing information but about helping ideas grow over time.

At some point, I realized that many of my notes don't contain anything private. If they're useful to me, there is a good chance they might be useful to someone else too. That is when I adopted the principle: If a note can be public, it should be.

So a few years ago, I began publishing these kinds of notes on my site. You might have seen examples like Principles for life, PHPUnit tests for Drupal, Brewing coffee with a moka pot when camping, or Setting up password-free SSH logins.

These pages on my website are not blog posts. They are living notes. I update them as I learn more or come back to the topic. To make that clear, each note begins with a short disclaimer that says what it is. Think of it as a digital notebook entry rather than a polished essay.

Now, I do my best to follow my principle, but I fall short more than I care to admit. I have plenty of notes in Obsidian that could have made it to my website but never did.

Often, it's simply inertia. Moving a note from Obsidian to my Drupal site involves a few steps. While not difficult, these steps consume time I don't always have. I tell myself I'll do it later, and then 'later' often never arrives.

Other times, I hold back because I feel insecure. I am often most excited to write when I am learning something new, but that is also when I know the least. What if I misunderstood something? The voice of doubt can be loud enough to keep a note trapped in Obsidian, never making it to my website.

But I keep pushing myself to share in public. I have been learning in the open and sharing in the open for 25 years, and some of the best things in my life have come from that. So I try to remember: if notes can be public, they should be.

The Drop Times: A Look Under the Hood of Lupus Decoupled Drupal

1 week 6 days ago
Forget the usual trade-offs of headless architecture. Lupus Decoupled keeps Drupal’s powerful backend features intact while giving developers full control on the frontend. In this exclusive interview, Wolfgang Ziegler of Drunomics breaks down how the system works, why it matters, and what’s coming next for the project that’s redefining decoupled Drupal.

Metadrop: Metadrop April 2025: new releases for Drupal ecosystem, privacy and content editorial experience

1 week 6 days ago

In March, Metadrop continued its contributions to the Drupal ecosystem with a particular focus on privacy and content editorial experience. The team released new modules, updated existing ones, added integrations, and assisted clients with some internal issues not directly related to Drupal, while still finding time to do research on AI.

New modules and releases

Iframe Consent

We developed a new module to manage IFrame consent, ensuring GDPR-compliant handling of embedded iframes by loading third-party content only after obtaining user consent. This effort enhances privacy in addition to existing modules like EXIF Removal and Youtube Cookies.

Watchdog Statistics 1.0.6

The release of version 1.0.6 added date filters, enabling users to generate reports from previous months and display log statistics for the last month — an…
