<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Voice-Cloning on Digital Archive Systems Tech Blog</title><link>https://tech.ldas.jp/en/tags/voice-cloning/</link><description>Recent content in Voice-Cloning on Digital Archive Systems Tech Blog</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 30 Apr 2026 06:00:00 +0900</lastBuildDate><atom:link href="https://tech.ldas.jp/en/tags/voice-cloning/index.xml" rel="self" type="application/rss+xml"/><item><title>ElevenLabs v2 vs v3 for Japanese Tech Narration — A/B Comparison Using a Voice-Cloned Synthetic Voice</title><link>https://tech.ldas.jp/en/posts/elevenlabs-v3-japanese-tech-narration/</link><pubDate>Thu, 30 Apr 2026 06:00:00 +0900</pubDate><guid>https://tech.ldas.jp/en/posts/elevenlabs-v3-japanese-tech-narration/</guid><description>&lt;blockquote>
&lt;p>This article is co-authored with generative AI. While I have cross-checked facts against official documentation where possible, errors may remain. Please verify primary sources before making important decisions.&lt;/p>&lt;/blockquote>
&lt;p>I ran an experiment in narrating technical blog articles with a synthetic voice cloned from my own speech. The audio is generated with &lt;a href="https://elevenlabs.io/">ElevenLabs&lt;/a> Voice Cloning together with the v3 model (&lt;code>eleven_v3&lt;/code>, in alpha at the time of writing).&lt;/p>
&lt;p>This post records an A/B comparison of v2 (&lt;code>eleven_multilingual_v2&lt;/code>) and v3 on the same Japanese narration material, along with operational observations.&lt;/p></description></item></channel></rss>