i dislike every step of the canned dialogue tree that begins:
"ai is being used for x thing, skirting existing norms of responsibility, and this is bad"
"this is good actually, this could lead to the reduction in having to do x"
idk, it's just always too narrow and stupid. the better questions: what kind of institutional structure could produce x as a reasonably demanded activity in the first place, and why do we have any reason to believe that ai performance of x would alleviate those pressures, even if adequately performed (a big if in the short term)?