I have often bridged between all levels of very large construction projects - but those had multi-billion-dollar budgets... I think alignment is going to be one of the hardest areas to solve - and while I won't be a hero, I really want to be involved in alignment policy in any capacity whatsoever.
How do?
But what is the current sentiment on AI alignment? My mind is swimming with too many fragmented conversations to form a concise, cohesive, accurate picture of the current state.
Is there a canonical source for where we are in alignment? (It also seems to me that alignment and AGI are essentially the same problem - one approached from a technical standpoint [AGI], the other from a policy standpoint [alignment].)
--
Discuss?
Moreover, AI alignment falls into the more general category of AI understanding and AI constraints. Topics like prompt-injection attacks are somewhat relevant to alignment; preventing an AI from revealing a secret code or exfiltrating emails (https://twitter.com/wunderwuzzi23/status/1659411665853779971) is relevant in the same way as preventing it from swearing or trying to hurt people. Constraining AI responses so they can be trusted and consumed by other programs is very important; see GPT's "system prompt", Microsoft Guidance (https://github.com/microsoft/guidance), and other tools and discussions.
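To make the "constrain responses so other programs can trust them" point concrete, here is a minimal sketch. Everything in it is invented for illustration (the function name, the action schema, the allowed-action list); the idea is just that downstream code validates a model's reply against a strict contract instead of trusting free text, so an injected instruction never reaches the next step.

```python
import json

# Hypothetical contract: the model is asked to answer ONLY with JSON of the
# form {"action": <one of ALLOWED_ACTIONS>, "argument": <string>}.
ALLOWED_ACTIONS = {"search", "summarize", "refuse"}

def parse_model_reply(reply: str) -> dict:
    """Validate a model reply before any downstream code acts on it.

    Anything that isn't exactly the expected JSON shape -- including
    injected free-text instructions -- is rejected with ValueError.
    """
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        raise ValueError("reply is not valid JSON")
    if not isinstance(data, dict) or set(data) != {"action", "argument"}:
        raise ValueError("reply does not match the expected schema")
    if data["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"disallowed action: {data['action']!r}")
    if not isinstance(data["argument"], str):
        raise ValueError("argument must be a string")
    return data

# A well-formed reply passes; an injection attempt is rejected.
print(parse_model_reply('{"action": "search", "argument": "alignment survey"}'))
```

Tools like Guidance push this further by constraining the model's output *as it is generated*, rather than validating after the fact, but the trust boundary is the same.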
But you probably won't hear much about AI alignment itself, except from people who think it's unnecessary.
A hammer is aligned with a functional requirement: to be a lever swung with force against a surface. That doesn't make it synonymous with a platonic ideal lever moving the world. More to the point, the lever to move the world is a questionable thesis, just as AGI is.