PHONOLOGY → MUSIC
─────────────────────────────────────────────────
[+voice] / [-voice] → Voiced/voiceless synth
[+nasal] / [-nasal] → Resonance (reverb)
[+anterior] → High frequencies
[+coronal] → Articulation (attack)
[+continuant] → Long sustain
[+strident] → Distortion/brightness
Place of articulation → Register (low/high)
Manner of articulation → ADSR envelope
Voicing → Waveform type
High tone → C5 (high note)
Mid tone → G4 (fifth)
Low tone → C4 (fundamental)
Analysis: tonal system = harmonic progression
Guarani (contrastive nasality)
Oral: sine wave (clean)
Nasal: triangle + resonance (nasalized)
Mapping: phonological feature → timbre
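The oral/nasal timbre mapping above can be expressed as a small lookup. A minimal sketch; the synth settings mirror the scheme above, and the `resonance` value of 0.7 is the one used for Guarani later in these notes:

```python
def vowel_timbre(nasal):
    """Map the Guarani oral/nasal contrast to synth settings per the scheme above."""
    if nasal:
        # Nasal vowel: triangle wave plus resonance for the nasalized quality
        return {'oscillator': 'triangle', 'resonance': 0.7}
    # Oral vowel: clean sine wave
    return {'oscillator': 'sine', 'resonance': 0.0}
```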
CV structure → Musical rhythm
CV: quarter note (♩)
CVC: quarter + 8th (♩♪)
V: 8th note (♪)
CCV: triplet (♪♪♪)
Mora count → Duration
1 mora: 8th note
2 moras: quarter note
3 moras: dotted quarter
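The syllable-shape and mora mappings above can be sketched as lookup tables. A minimal sketch; durations are in beats, with the values taken from the tables above (quarter = 1.0, eighth = 0.5):

```python
# Syllable shape → rhythmic pattern (in beats), per the table above
SHAPE_TO_RHYTHM = {
    'CV':  [1.0],              # quarter note
    'CVC': [1.0, 0.5],         # quarter + eighth
    'V':   [0.5],              # eighth note
    'CCV': [1/3, 1/3, 1/3],    # triplet
}

# Mora count → duration (eighth, quarter, dotted quarter)
MORA_TO_DURATION = {1: 0.5, 2: 1.0, 3: 1.5}

def syllable_rhythm(shape):
    """Return the rhythmic pattern (in beats) for a syllable shape."""
    return SHAPE_TO_RHYTHM.get(shape, [1.0])
```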
ipa_to_music.py
IPA_TO_PITCH = {
    # Vowels (by height)
    'i': 72,  # C5 (high)
    'e': 69,  # A4
    'a': 60,  # C4 (central)
    'o': 55,  # G3
    'u': 48,  # C3 (low)
    # Consonants (by place of articulation)
    'p': 36,  # Bilabial (low)
    't': 48,  # Alveolar (mid)
    'k': 60,  # Velar (high)
    'n': 52,  # Alveolar nasal (used in the Tikuna example below)
    # Fricatives, stored as (pitch, distortion) pairs
    's': (60, 0.3),  # 30% distortion
    'ʃ': (65, 0.5),  # 50% distortion
    'f': (55, 0.2),  # 20% distortion
}
def phoneme_to_note(phoneme, duration=0.25):
    """Convert an IPA phoneme to a musical note."""
    if phoneme not in IPA_TO_PITCH:
        return None  # phoneme outside the inventory
    entry = IPA_TO_PITCH[phoneme]
    # Fricatives are stored as (pitch, distortion) pairs
    pitch = entry[0] if isinstance(entry, tuple) else entry
    # Add phonological features: voiced = forte, voiceless = piano
    velocity = 100 if is_voiced(phoneme) else 60
    return {
        'pitch': pitch,
        'velocity': velocity,
        'duration': duration
    }
def word_to_melody(word_ipa):
    """
    Convert a word in IPA (a list of phonemes) to a melody.
    Example: /ti.ku.na/ → [Ti-ku-na people]
    """
    notes = []
    for i, phoneme in enumerate(word_ipa):
        note = phoneme_to_note(phoneme)
        if note is None:
            continue  # skip phonemes outside the inventory
        note['time'] = i * 0.25  # 250 ms per phoneme
        notes.append(note)
    return notes
Example: Tikuna
tikuna = ['t', 'i', 'k', 'u', 'n', 'a']
melody = word_to_melody(tikuna)
Output: [48, 72, 60, 48, 52, 60] MIDI notes
music_generator.py - Extension
class BubbleMusicGenerator:
    def __init__(self, phonology_system='universal'):
        self.base_note = 60
        self.phonology = phonology_system

    def apply_phonology(self, notes, language='tikuna'):
        """Apply phonological constraints to a melody (notes are dicts with a 'pitch' key)."""
        if language == 'tikuna':
            # 3-tone system
            allowed_notes = [48, 60, 72]  # C3, C4, C5
            for note in notes:
                note['pitch'] = nearest_note(note['pitch'], allowed_notes)
        elif language == 'guarani':
            # Add nasality (resonance)
            for note in notes:
                note['resonance'] = 0.7  # nasal quality
        return notes
analysis = {
    "asset": "ChatGPT",
    "bubble_index": 0.324,
    "language_influence": "tikuna"  # NEW
}

Generate with phonological constraints:
generator = BubbleMusicGenerator(phonology_system='tikuna')
midi = generator.generate_from_analysis(analysis)

Result: ChatGPT metrics + Tikuna tonal system
3-tone restricted melody, bubble-informed rhythm
TUPI_SCALE = [0, 2, 4, 7, 9]  # C D E G A (pentatonic)

def apply_tupi_scale(melody):
    """Restrict to a pentatonic scale (common in indigenous music)."""
    return [quantize_to_scale(note, TUPI_SCALE) for note in melody]
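The `nearest_note` and `quantize_to_scale` helpers used above are not defined in these notes. A minimal sketch, assuming MIDI pitch numbers and scales given as semitone offsets from C (note that this simple version does not wrap pitch classes near the octave boundary):

```python
def nearest_note(pitch, allowed_notes):
    """Snap a MIDI pitch to the closest pitch in an allowed set."""
    return min(allowed_notes, key=lambda n: abs(n - pitch))

def quantize_to_scale(pitch, scale):
    """Snap a MIDI pitch to the nearest pitch class in a scale (offsets from C)."""
    octave, pitch_class = divmod(pitch, 12)
    best = min(scale, key=lambda s: abs(s - pitch_class))
    return octave * 12 + best
```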
MACRO_JE_RHYTHM = {
    'asymmetric': True,
    'meters': [5, 7, 11],  # irregular meters
    'polyrhythm': True
}

def apply_macro_je_rhythm(notes):
    """Cycle an asymmetric 5/4 + 7/8 duration pattern over the notes (sketch)."""
    pattern = [1.0, 1.0, 0.5, 1.0, 1.5,   # one 5/4 bar
               0.5, 0.5, 0.5, 1.0, 1.0]   # one 7/8 bar
    for i, note in enumerate(notes):
        note['duration'] = pattern[i % len(pattern)]
    return notes
// sonification.html - Extension
function createVowelSynth(vowel) {
    // Formant synthesis (Fant, 1960)
    const formants = {
        'i': [270, 2300, 3000],  // F1, F2, F3
        'e': [530, 1840, 2480],
        'a': [660, 1720, 2410],
        'o': [570, 840, 2410],
        'u': [300, 870, 2240]
    };
    const synth = new Tone.Synth({
        oscillator: { type: 'sawtooth' }  // harmonically rich source for the formant filters
    });
    // Parallel F1, F2, F3 bandpass filters, each routed to the output
    const f1 = new Tone.Filter(formants[vowel][0], 'bandpass').toDestination();
    const f2 = new Tone.Filter(formants[vowel][1], 'bandpass').toDestination();
    const f3 = new Tone.Filter(formants[vowel][2], 'bandpass').toDestination();
    synth.connect(f1);
    synth.connect(f2);
    synth.connect(f3);
    return synth;
}
// Usage in bubble sonification
let synth;
if (metrics.adoption > 0.7) {
    synth = createVowelSynth('i');  // high vowel = high adoption
} else {
    synth = createVowelSynth('a');  // central vowel
}
class MusicalGrammar:
    """
    Apply generative transformations,
    similar to phonological rules.
    """
    def __init__(self):
        self.rules = []

    def add_rule(self, context, transformation):
        """
        Example: /t/ → [tʰ] / ___ V
        Musical: C4 → C5 / ___ (high adoption)
        """
        self.rules.append((context, transformation))

    def apply(self, melody, metrics):
        for context, transform in self.rules:
            if context(metrics):
                melody = transform(melody)
        return melody
Example rules:
grammar = MusicalGrammar()

Rule: raise pitch when adoption > 70%
grammar.add_rule(
    context=lambda m: m['adoption'] > 0.7,
    transformation=lambda notes: [n + 12 for n in notes]  # +1 octave
)

Rule: add dissonance when divergence > 50%
grammar.add_rule(
    context=lambda m: m['divergence'] > 0.5,
    transformation=add_tritone_intervals
)
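The `add_tritone_intervals` transformation is referenced but not defined. A minimal sketch, assuming notes are plain MIDI pitch numbers as in the octave-raising rule above:

```python
def add_tritone_intervals(notes):
    """Interleave each pitch with the pitch a tritone (6 semitones) above it."""
    out = []
    for pitch in notes:
        out.extend([pitch, pitch + 6])  # dissonant tritone partner
    return out
```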
class PhonologicalSonification:
    """
    Research tool: map any phonological system to music.
    """
    def __init__(self, inventory):
        self.phoneme_inventory = inventory
        self.feature_matrix = self.build_features()

    def build_features(self):
        """
        Binary feature matrix:
        [+voice, -nasal, +anterior, ...]
        """
        matrix = {}
        for phoneme in self.phoneme_inventory:
            matrix[phoneme] = extract_features(phoneme)
        return matrix

    def sonify_contrast(self, phoneme_a, phoneme_b):
        """
        Sonify a minimal pair.
        /ta/ vs /da/ → pitch difference + voicing
        """
        features_a = self.feature_matrix[phoneme_a]
        features_b = self.feature_matrix[phoneme_b]
        differences = feature_diff(features_a, features_b)
        # Map differences to musical parameters
        melody = []
        for feature in differences:
            if feature == 'voice':
                # Voiced = fuller sound
                melody.append({'velocity': 100})
            elif feature == 'nasal':
                # Nasal = resonance
                melody.append({'reverb': 0.8})
        return melody
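The `extract_features` and `feature_diff` helpers used above are not defined in these notes. A minimal sketch with a hypothetical three-phoneme feature table for illustration; a real system would use a full distinctive-feature matrix (e.g. the `panphon` library provides IPA feature vectors):

```python
# Hypothetical mini feature table, for illustration only
FEATURES = {
    't': {'voice': False, 'nasal': False, 'coronal': True},
    'd': {'voice': True,  'nasal': False, 'coronal': True},
    'n': {'voice': True,  'nasal': True,  'coronal': True},
}

def extract_features(phoneme):
    """Look up the binary feature bundle for a phoneme."""
    return FEATURES[phoneme]

def feature_diff(features_a, features_b):
    """Return the features on which two phonemes disagree."""
    return [f for f in features_a if features_a[f] != features_b.get(f)]
```

With this, `feature_diff(extract_features('t'), extract_features('d'))` isolates the voicing contrast of the /ta/ vs /da/ minimal pair.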
Hypothesis: documentation language affects adoption

assets_by_language = {
    'english': ['chatgpt', 'langchain'],
    'multilingual': ['huggingface', 'pytorch'],
    'portuguese': ['brazilian-llm'],
}

results = {}
for lang, assets in assets_by_language.items():
    for asset in assets:
        analysis = analyze(asset)
        # Generate music with linguistic influence
        music = generate_music(
            analysis,
            linguistic_system=lang
        )
        results[asset] = {
            'bubble': analysis.bubble_index,
            'language': lang,
            'music_file': music.file
        }

Compare: English vs. multilingual projects.
Does the musical difference reflect market penetration?
from mido import MidiFile, MetaMessage

def export_with_metadata(midi_file, analysis, linguistic_data):
    """
    Embed linguistic annotations in a MIDI file.
    """
    mid = MidiFile(midi_file)
    # Add custom metadata to the first track
    meta_track = mid.tracks[0]
    # Language info
    meta_track.append(MetaMessage(
        'text',
        text=f"Language: {linguistic_data['language']}"
    ))
    # Phoneme sequence
    meta_track.append(MetaMessage(
        'text',
        text=f"Phonemes: {linguistic_data['phonemes']}"
    ))
    # Bubble metrics
    meta_track.append(MetaMessage(
        'text',
        text=f"Bubble: {analysis.bubble_index}"
    ))
    mid.save(f"{midi_file}_annotated.mid")
Objective: map phonological systems to musical scales

Methodology:
1. Collect phonological inventories (Tikuna, Guarani, Kaingang)
2. Extract distinctive features
3. Map features → musical parameters
4. Generate algorithmic compositions
5. Perceptual analysis (can listeners distinguish the languages?)

Output:
- Dataset of melodies per language
- Generative phonology→music system
- Interdisciplinary paper (Linguistics + Computation + Music)
STRATAGO scenario planning + sonification
scenario = {
    'technology': 'AGI',
    'adoption': [0.1, 0.3, 0.6, 0.8],  # timeline
    'language_community': 'guarani',  # NEW
    'cultural_context': 'indigenous'  # NEW
}
Generate music that evolves over time
With phonological constraints
Representing different futures
for t, adoption in enumerate(scenario['adoption']):
    metrics = calculate_at_time(t)
    music = generate_music(
        metrics,
        linguistic_system=scenario['language_community']
    )
# Result: 4 MIDI files showing temporal evolution
# with Guarani phonological structure
1. Linguistic → Musical Corpus
   - Texts in indigenous languages
   - Phonetic conversion → MIDI
   - Statistical analysis of patterns
2. Machine Learning
   - Train a phonemes → notes model
   - Generative model: a new "musical language"
3. Interactive Interface
   - Input: IPA text
   - Output: music + visualization
4. Interdisciplinary Paper
   - Computational Linguistics
   - Music Information Retrieval
   - Data Sonification
---
Unique connection: your background in phonology + music + technology = a one-of-a-kind sonification system with linguistic grounding!