InspireMusic - Alibaba Tongyi Lab's open-source music generation framework, supporting music, song, and audio generation - local one-click bundle download
InspireMusic is a unified music generation framework open-sourced by Alibaba's Tongyi Lab. It aims to be an open-source toolkit that combines music, song, and audio generation in one package, giving researchers, developers, and music enthusiasts a comprehensive creative platform.
InspireMusic not only provides researchers and developers with rich tools for training and fine-tuning music/song/audio generation models, it also ships efficient pre-trained models for high-quality output. At the same time, the toolkit greatly lowers the barrier to music creation, letting hobbyists generate varied pieces of music from a simple text description or an audio prompt.
Its text-to-music mode covers a wide range of genres, emotional expression, and fine-grained control over musical structure, offering a great deal of creative freedom and flexibility.
Key features:
Unified audio generation framework: built on audio large-model technology, InspireMusic supports music, song, and audio generation, offering users a wide range of options;
Flexible, controllable generation: driven by text prompts and musical feature descriptions, users can precisely control the style and structure of the generated music;
Easy to use: simple model fine-tuning and inference tools give users an efficient training and tuning workflow.
Usage tutorial (NVIDIA GPU recommended, 12 GB+ VRAM, CUDA 12.4):
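Before launching the package, you can verify that PyTorch sees your GPU and that you have enough VRAM. A minimal check using the torch build bundled with the package:

import torch

# Confirm that an NVIDIA GPU with CUDA support is visible to PyTorch.
if not torch.cuda.is_available():
    print("CUDA is not available; generation will fail or fall back to CPU.")
else:
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024 ** 3
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB, CUDA runtime: {torch.version.cuda}")
    if vram_gb < 12:
        print("Warning: less than 12 GB of VRAM; the larger models may run out of memory.")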
By default only the InspireMusic-1.5B-Long model is included. If you need a different model, switch to it and click Generate; it will be downloaded automatically (a manual download sketch follows the model list below). The five models are:
InspireMusic-Base-24kHz: pre-trained music generation model, 24 kHz mono, up to 30 seconds
InspireMusic-Base: pre-trained music generation model, 48 kHz, up to 30 seconds
InspireMusic-1.5B-24kHz: pre-trained 1.5B music generation model, 24 kHz mono, up to 30 seconds
InspireMusic-1.5B: pre-trained 1.5B music generation model, 48 kHz, up to 30 seconds
InspireMusic-1.5B-Long: pre-trained 1.5B music generation model, 48 kHz, supports long-form generation of 5 minutes or more
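If you prefer to download a model manually instead of relying on the automatic download, the official checkpoints are hosted on ModelScope and can be fetched with snapshot_download. A sketch, assuming the model IDs follow the iic/<model-name> pattern and the usual pretrained_models/ layout (verify the exact IDs on the model pages):

from modelscope import snapshot_download

# Assumed ModelScope model IDs; check the model pages for the exact names.
models = [
    "iic/InspireMusic-1.5B-Long",  # 48 kHz, long-form generation (5+ minutes)
    # "iic/InspireMusic-1.5B",     # 48 kHz, up to 30 seconds
    # "iic/InspireMusic-Base",     # 48 kHz, up to 30 seconds
]

for model_id in models:
    name = model_id.split("/")[-1]
    # Downloads (or resumes) the checkpoint into pretrained_models/<model name>.
    snapshot_download(model_id, local_dir=f"pretrained_models/{name}")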
1. Generate music from a simple text description
For example, enter a prompt such as: The instrumental piece exudes a playful and whimsical atmosphere, likely featuring lively and rhythmic elements. The music seems to be inspired by nature and animals, creating an engaging and light-hearted experience.
Chinese prompts are also supported.
2. Control the generated music with genre and song-structure tags (a prompt-assembly sketch follows this example)
For example, structure tag: <|Chorus|>
Genre: R&B
Input text: A soothing blend of instrumental and R&B rhythms, featuring serene and calming melodies.
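In the web UI the structure tag, genre, and free-text description are effectively combined into one prompt. A minimal sketch of that assembly, assuming the tag is simply prepended to the text as in the demo above (the <|Chorus|> token comes from the post; any other tags are assumptions):

# Hypothetical prompt assembly for tag-controlled generation.
structure_tag = "<|Chorus|>"  # section tag shown in the demo; other tags are assumed
description = ("A soothing blend of instrumental and R&B rhythms, "
               "featuring serene and calming melodies.")
prompt = f"{structure_tag} {description}"
print(prompt)  # paste the resulting string into the text box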
3. Continue music from a reference audio clip
Upload a reference audio clip and click the Start Music Continuation button to generate a continuation in the same style as the reference.
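The reference clip is an ordinary audio file; if yours is long, trimming it to a short excerpt before uploading keeps the prompt manageable. A small sketch with torchaudio (file names are placeholders; the web UI may also handle trimming for you):

import torchaudio

# Load the reference clip (placeholder path) and keep only the first 10 seconds
# to use as the continuation prompt.
waveform, sample_rate = torchaudio.load("reference.wav")
excerpt = waveform[:, : 10 * sample_rate]
torchaudio.save("reference_10s.wav", excerpt, sample_rate)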
Download links:
Quark Netdisk: https://pan.quark.cn/s/355ec79f13a7
Baidu Netdisk: **** This content requires purchase ****
Unzip password: https://deepface.cc/ - the complete URL itself is the password; copy and paste it exactly, with no extra spaces.
I'll give this music generator a try first - thanks to the OP for sharing!
Traceback (most recent call last):
File "<frozen __main__>", line 3, in <module>
File "<frozen app>", line 23, in <module>
File "D:\soft\InspireMusic\deepface\lib\site-packages\torch\__init__.py", line 148, in <module>
raise err
OSError: The specified module could not be found. Error loading "D:\soft\InspireMusic\deepface\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
Press any key to continue . . .
llongg posted on 2025-2-26 08:46:
Traceback (most recent call last): ...
If you have a problem, read the newcomer must-read thread first.
无言以对 posted on 2025-2-26 08:51:
If you have a problem, read the newcomer must-read thread first.
Solved, thank you!
My generation fails: The list of tensors is empty for UUID: 1ae2c3b5-65e1-11f0-b302-00055d85be02
Traceback (most recent call last):
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\gradio\queueing.py", line 521, in process_events
response = await route_utils.call_process_api(
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
output = await app.get_blocks().process_api(
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\gradio\blocks.py", line 1945, in process_api
result = await self.call_function(
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\gradio\blocks.py", line 1513, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\anyio\_backends\_asyncio.py", line 962, in run
result = context.run(func, *args)
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\gradio\utils.py", line 831, in wrapper
response = f(*args, **kwargs)
File "<frozen app>", line 143, in demo_inspiremusic_t2m
File "<frozen app>", line 120, in music_generation
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "D:\inspireMusi\InspireMusic\inspiremusic\cli\inference.py", line 151, in inference
for model_output in self.model.cli_inference(**model_input):
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\torch\utils\_contextlib.py", line 36, in generator_context
response = gen.send(None)
File "D:\inspireMusi\InspireMusic\inspiremusic\cli\inspiremusic.py", line 111, in cli_inference
for model_output in self.model.inference(**model_input, duration_to_gen=duration_to_gen, task=task):
File "D:\inspireMusi\InspireMusic\deepface\lib\site-packages\torch\utils\_contextlib.py", line 36, in generator_context
response = gen.send(None)
File "D:\inspireMusi\InspireMusic\inspiremusic\cli\model.py", line 266, in inference
logging.info(f"LLM generated audio token length: {this_music_token.shape}")
UnboundLocalError: local variable 'this_music_token' referenced before assignment
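For context, this UnboundLocalError is the generic Python failure that appears when a variable is only assigned inside a loop (or branch) that never executes and is then referenced afterwards; here no audio tokens were produced, so the logging line runs before this_music_token was ever bound. A minimal standalone repro of the pattern (not the actual InspireMusic code):

def log_last_token(tokens):
    # this_token is only bound inside the loop body...
    for this_token in tokens:
        pass
    # ...so with an empty input the next line raises UnboundLocalError.
    print(f"last token: {this_token}")

log_last_token([])  # UnboundLocalError: local variable 'this_token' referenced before assignment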