AI systems are being used in many facets of public life, in areas as diverse as policing, healthcare, and housing. However, many of these systems are developed largely in isolation from the communities they are meant to serve. In the best case, this may lead to applications that are improperly specified or scoped, and are thereby ineffective; in the worst case, it can lead (and has led) to harmful, biased outcomes for marginalized populations. In response, a growing set of voices has called for meaningful community engagement in the design of public-facing AI research (i.e., AI research likely to impact the public). However, despite emerging HCI methods for engaging stakeholders throughout the AI design process, members of impacted communities are too often asked for feedback only after deployment. We believe this disconnect occurs in large part because academic AI researchers lack organizational incentives to actually use existing community engagement methods, mirroring the incentive gaps that have hindered industry AI practitioners from adopting AI fairness methods. In light of this, we call for universities to develop and implement requirements for community engagement in AI research. These requirements should ensure that AI researchers designing public-facing systems make the needs and interests of impacted communities a fundamental part of their work, and, crucially, that they engage community members throughout the design and deployment of this work. We propose that universities create these requirements so that (1) university-based AI researchers are incentivized to incorporate meaningful community engagement throughout the research life cycle, (2) the resulting research is more effective at serving the needs and interests of impacted communities, not simply the stakeholders with greater influence, and (3) the AI field values the process and challenge of community engagement as an important contribution in its own right.