
Search Results


  • Five Types of Research Partnerships in Education | Rutgers CESP

    The Promise of Partnerships: Researchers Join Forces with Educators to Solve Problems of Practice
    Gail R. Meister & Cynthia L. Blitz

    In 'The Promise of Partnerships: Researchers Join Forces With Educators to Solve Problems of Practice,' the feature article in the June 2016 issue of The Learning Professional journal, Gail R. Meister and Cynthia L. Blitz explore the benefits of partnerships between educators and researchers and the potential these collaborations have to drive solutions to problems of practice. The paper examines five types of research partnerships, along with their use cases, outcomes, and evaluation: communities of practice, study councils, research alliances, design research, and networked improvement communities. Other topics include how professional learning leaders can maximize the quality and quantity of learning for practitioners in research-practice partnerships, with closing words on how to make the most of professional learning in these collaborations.

    June 2016 | The Learning Professional (formerly JSD) | Volume 37, Issue 3

    Citation: Meister, G. R., & Blitz, C. L. (2016). The Promise of Partnerships: Researchers Join Forces with Educators to Solve Problems of Practice. The Learning Professional, 37(3), 46–52. https://learningforward.org/journal/june-2016-issue/the-promise-of-partnerships/

  • Nedim Yel, Ph.D. | Rutgers CESP

    Nedim Yel, Ph.D.
    Senior Statistician/Researcher

  • Prison Sexual Violence Rates by Gender & Perpetrator | Rutgers CESP

    Sexual Violence Inside Prisons: Rates of Victimization
    Nancy Wolff, Cynthia L. Blitz, Jing Shi, Ronet Bachman & Jane A. Siegel

    Sexual violence in correctional settings is a critical public health issue with consequences that extend beyond prison walls into the communities to which formerly incarcerated individuals ultimately return. This research by Nancy Wolff, Cynthia L. Blitz, and colleagues is the first comprehensive study to examine sexual victimization rates across an entire state prison system, surveying 6,964 male and 564 female inmates across 13 facilities using audio computer-assisted self-interviewing (CASI) methodology. The study was conducted as part of the Prison Rape Elimination Act (PREA) initiative, which mandated rigorous measurement of sexual victimization rates in American correctional facilities to inform evidence-based prevention and intervention strategies. The research reveals striking disparities in victimization rates across multiple dimensions, with female inmates experiencing significantly higher rates of inmate-on-inmate sexual victimization than their male counterparts: six-month prevalence rates showed that 212 per 1,000 female inmates reported some form of sexual victimization by other inmates, compared to 43 per 1,000 male inmates. The study distinguished between two types of sexual violence, abusive sexual contacts (unwanted touching of intimate body parts) and nonconsensual sexual acts (forced penetration or oral sex), finding that abusive sexual contact was consistently more prevalent across all categories. Notably, staff-on-inmate sexual victimization rates were comparable between male and female facilities, with approximately 76 per 1,000 inmates in both populations reporting such incidents over a six-month period.
    The implications of these findings extend well beyond the prison environment, as the study emphasizes that sexual victimization increases risks for HIV transmission, psychological trauma, and future violence both within correctional facilities and in the community upon release. The research methodology employed multiple question formats and comprehensive sampling strategies to minimize limitations common in previous studies, including small sample sizes, high non-response rates, and reliance on single facilities. The variation in victimization rates across facilities within the same prison system suggests that institutional factors play a crucial role in creating safer environments, pointing toward targeted interventions to reduce sexual violence. These findings provide essential baseline data for developing evidence-based policies and practices aimed at creating more humane and secure correctional environments while addressing the broader public health consequences of prison sexual violence.

    May 2006 | Journal of Urban Health | Volume 83, Issue 5 | DOI: 10.1007/s11524-006-9065-2

    Citation: Wolff, N., Blitz, C. L., Shi, J., Bachman, R., & Siegel, J. A. (2006). Sexual Violence Inside Prisons: Rates of Victimization. Journal of Urban Health, 83(5), 835–848. https://doi.org/10.1007/s11524-006-9065-2

  • Decoding Program Evaluation in Research | Rutgers CESP

    From Here to There: Beyond the Common Metrics
    Cynthia L. Blitz

    Invited speaker Cynthia L. Blitz, research professor and executive director of the Rutgers Center for Effective School Practices, presented 'From Here to There: Beyond the Common Metrics' in 2019 at the annual retreat of the New Jersey Alliance for Clinical and Translational Science (NJ ACTS). Topic areas Blitz presented include:

    Program evaluation: a critical scientific approach that transforms how researchers understand and improve complex interventions in clinical and translational research. Researchers approach program evaluation through multiple lenses, each offering unique insights into intervention effectiveness. Formative evaluation serves as an early warning system, assessing a program's feasibility before full-scale implementation. Process/implementation evaluation then tracks the program's journey, identifying potential barriers and ensuring the intervention remains true to its original design. Outcome/effectiveness evaluation takes a broader view, measuring the program's impact on knowledge and behavioral changes. Economic (cost-effectiveness) evaluation provides a practical perspective by comparing resource investments against achieved results. Impact evaluation assesses program effectiveness in achieving its ultimate goals.

    Logic models: sophisticated frameworks that map out the hypothesized relationships between program components. These models illuminate the pathway from initial inputs to desired outcomes, accounting for assumptions and external factors that might influence program success. By creating these visual representations, researchers can better understand the complex dynamics of their interventions. Frameworks like the Consolidated Framework for Implementation Research (CFIR) offer structured methodologies for assessing implementation success.
    By examining intervention characteristics, individual factors, organizational settings, and implementation processes, researchers can develop more targeted and effective programs. The NIH Clinical and Translational Science Awards (CTSA) initiative requires quality evaluation to show that its programs are well implemented, efficiently managed, and effective. Common metrics provide a starting point, but truly comprehensive evaluation requires a more nuanced approach. Program evaluation is a collaborative journey: it demands clear goals, systematic investigation, and a commitment to understanding the complex mechanisms that drive successful interventions in clinical research.

    October 2019 | New Jersey Alliance for Clinical and Translational Science Inaugural Retreat | New Brunswick, New Jersey, USA

    Citation: Blitz, C. L. (2019, October 16). From Here to There: Beyond the Common Metrics [Invited presentation]. New Jersey Alliance for Clinical and Translational Science Annual Retreat, New Brunswick, New Jersey, USA. https://njacts.rbhs.rutgers.edu/event/inaugural-nj-acts-retreat/

  • 2022 Rutgers University Computer Science Summit | Rutgers CESP

    Hosted by Rutgers University and the CS4NJ Coalition with support from Google

    The Rutgers University Computer Science Department, alongside the CS4NJ Coalition and with support from Google, Inc., hosted the 2022 New Jersey Computer Science Summit on Scalability and Diversity. This was the sixth annual summit and offered over 15 sessions to stakeholders of computer science education, focusing on pedagogical approaches, new student learning standards, and common problems of practice. Keynote presentations were given by Dr. Andrew Torres, on facing challenges in CS education as a diverse learning community, and by Michael Geraghty, Chief Information Security Officer for New Jersey and Director of the New Jersey Cybersecurity and Communications Integration Cell, on developing the next generation of cybersecurity professionals. Sessions focused on cybersecurity, diversity and equity in CS education, ethics in computing, dual enrollment and AP courses, and the integration of CS learning standards. The agenda, speaker information, session descriptions, and links to available slides are posted on the summit's webpage. View the Summit Webpage

    Suggested Citation: Center for Effective School Practices. (2022). 2022 Rutgers University Computer Science Summit [Event archive]. Rutgers University. New Brunswick, New Jersey, USA. https://cesp.rutgers.edu/eir-resource-library/2022-rutgers-cs-summit/

  • Generative AI: Dos and Don'ts for the Computer Science Classroom | Rutgers CESP

    Artificial intelligence has made its way into schools as a learning tool. From asking for clarification on a topic to generating images for projects, its influence has shaped the way students learn and engage with new ideas in the classroom. As students engage with a variety of AI platforms, including ChatGPT, Gemini, and Copilot, AI usage in the classroom is increasingly prominent. Even with these technological advances, however, artificial intelligence still tends to miss the mark: from algorithmic bias to disinformation, AI can negatively impact learning in and outside of the classroom. The use of AI in the classroom is evident and present. But, as an educator, when and for what could I use it? Here are some ChatGPT do's and don'ts. Use these guidelines to rethink and reshape the use of AI in computer science education.

    Use AI to:

    • Debug after trying first: use AI to help identify errors, such as logic errors or runtime errors, after students have tested their code and attempted multiple fixes.
    • Set firm guidelines: clearly define how AI may be used during algorithm design, pseudocode writing, debugging, and testing, and when AI use is not permitted.
    • Organize ideas: use AI to help organize steps in an algorithm, outline pseudocode, or break a problem into smaller parts.
    • Generate practice code: AI can generate code examples or partially completed programs for students to trace, test, and debug.
    • Provide feedback: AI can support feedback on program logic, use of variables, conditionals, loops, and clarity of comments.
    • Practice tracing code: ask AI to create code where students track variable values, follow loops and conditionals, and predict outputs before running the program.
    • Learn coding vocabulary: AI can explain CS vocabulary such as variables, conditionals, loops, algorithms, data types, and events in student-friendly language.
    • Test programs: use AI to suggest test cases or edge cases to help students check whether their programs work as intended.
    • Reflect on learning: ask AI questions like, "Why does this algorithm work?", "What part of the program controls the flow?", or "How could this solution be made more efficient or readable?"
    • Support accessibility: AI can help rephrase instructions, explain errors, or provide alternative explanations for CS concepts.

    Do NOT use AI to:

    • Find information without verification: AI may generate incorrect explanations of algorithms, misleading code, or examples that do not work. All code and concepts must be tested and verified.
    • Answer coding challenges: AI should not generate solutions to programming tasks. Give students the chance to test code themselves.
    • Write the entire program: AI should not be used to produce full programs, final project code, or completed algorithms.
    • Replace algorithm design: students should not use AI to create algorithms, pseudocode, or flowcharts that they did not design themselves.
    • Skip debugging and testing: debugging, testing, and revising programs are essential parts of the CS learning process.
    • Assume AI is always correct: AI-generated code may contain logic errors, inefficient solutions, or poor programming practices.
    • Avoid learning core CS concepts: students still need to learn how variables, loops, conditionals, events, and data work without relying on AI.
    • Bypass collaboration: AI should not replace peer discussion, pair programming, or explaining ideas to others.

    It's clear: artificial intelligence can be a valuable support tool in middle school computer science when used intentionally and with clear expectations. Under these guidelines, AI can help students debug, organize ideas, and reflect on learning without replacing critical thinking or problem-solving. When used responsibly, AI can enhance learning while reinforcing essential computer science skills.
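    The "generate practice code" and "practice tracing code" suggestions above can be made concrete with a short example. The snippet below is a hypothetical illustration, not taken from this guide, of the kind of tracing exercise an AI tool might produce: students predict the running totals before executing the program.

```python
# Tracing exercise: predict the value of `total` after each loop pass,
# then run the program to check your prediction.
def running_even_totals(values):
    """Return the running total after each element, counting only even numbers."""
    total = 0
    history = []
    for v in values:
        if v % 2 == 0:          # conditional: only even values are added
            total += v
        history.append(total)   # record the total after this pass
    return history

print(running_even_totals([3, 4, 7, 10]))  # [0, 4, 4, 14]
```

    A follow-up prompt might ask students to explain why the first entry stays 0, reinforcing how the conditional controls the flow of the loop.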
    Suggested Citation: Center for Effective School Practices. (2026). Generative AI: Dos and Don'ts for the Computer Science Classroom [Practice guide]. Rutgers University. https://cesp.rutgers.edu/eir-resource-library/genai-dos-and-donts/

  • Employment Barriers & Predictors for Incarcerated Women | Rutgers CESP

    Predictors of Stable Employment Among Female Inmates in New Jersey
    Cynthia L. Blitz

    This study surveyed 908 female inmates at New Jersey's sole state correctional facility for women to identify factors that predict stable employment prior to incarceration. The research used two measures of employment stability: level of employment (full-time versus part-time work) and length of employment (ability to maintain jobs for one year or longer). Education emerged as the strongest predictor of stable employment. Women with high school education had nearly twice the likelihood of securing full-time employment compared to those without high school completion, while those with college education showed even greater employment stability. Behavioral health treatment also proved crucial, with women who received treatment for substance abuse or mental health disorders demonstrating significantly better employment outcomes than those who needed but did not receive such services. The study found that approximately half of the surveyed women were regularly employed prior to incarceration, with 63% working full-time positions. However, employment was predominantly in lower-prestige service occupations such as childcare, food service, and temporary positions. Factors commonly assumed to impact women's employment, such as having children under 18 or experiencing victimization, showed no significant association with employment stability. The findings underscore the importance of educational programming and behavioral health treatment in correctional facilities to improve post-release employment outcomes. The research recommends enhanced educational opportunities within prisons and expanded access to substance abuse and mental health treatment both during incarceration and upon community reentry.
    January 2006 | Journal of Offender Rehabilitation | Volume 43, Issue 1 | DOI: 10.1300/J076v43n01_01

    Citation: Blitz, C. L. (2006). Predictors of Stable Employment Among Female Inmates in New Jersey. Journal of Offender Rehabilitation, 43(1), 1–22. https://doi.org/10.1300/J076v43n01_01

  • Fran P. Trees, D.P.S. | Rutgers CESP

    Fran P. Trees, D.P.S.
    Teaching Professor
    e: fran.trees@rutgers.edu | p: (848) 445-7299

    Fran Trees is a Teaching Professor in the Computer Science Department at Rutgers. She also works closely with Rutgers CESP, preparing teachers to successfully implement CS courses into their curricula. Her research interests are in CS education, focusing on broadening participation and incorporating active learning in the CS classroom.

  • Understanding Algorithms with Board Games | Rutgers CESP

    Developed as part of the Extending the CS Pipeline: Enhancing Rigor and Relevance in Middle School CS project.

    An algorithm is a set of steps taken to complete a task. Algorithms are the building blocks that allow computer innovations to complete tasks and solve problems. They are clear and specific, so that at each step a computer knows exactly what action it should take and what information it should use or change, and they let us complete tasks and reach desired outcomes efficiently, accurately, and effectively. Students engage with algorithms on a day-to-day basis, both through technology and in daily interactions. By analyzing their structure, developing real-world connections, and engaging critically, students can better understand how algorithms shape communication, everyday experiences, work, and problem-solving.

    In our Exploring Algorithms lesson, students explore the concept of algorithms, critically engage with innovation, examine real-world examples, and create a flowchart showcasing the structure of an algorithm through a game of their choice. This lesson provides a structured approach to helping students understand algorithms and analyze their interactions with them.

    This lesson package includes the following:

    • Lesson plan, with timing and student learning standards (NJSLS)
    • Presentation slides

    Algorithms & Games - Slides (.pptx, 6.32 MB)
    Algorithms & Games - Lesson Plan & Overview (.docx, 2.38 MB)

    Suggested Citation: Center for Effective School Practices. (2025). Understanding Algorithms with Board Games [Lesson Plan Package]. Rutgers University. https://cesp.rutgers.edu/eir-resource-library/algorithms-board-games/
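    The flowchart activity maps each step of a game turn into an explicit algorithm. As a hypothetical illustration, not drawn from the lesson materials themselves, the same idea can be sketched in Python, with each step of an assumed simple race-to-the-end board game written out explicitly:

```python
import random

def take_turn(position, board_size):
    """One turn of a simple race-to-the-end board game, as explicit algorithm steps."""
    roll = random.randint(1, 6)      # Step 1: roll a six-sided die
    position += roll                 # Step 2: move forward that many spaces
    if position >= board_size:       # Step 3: check the win condition
        return board_size, True      #         reaching the last space wins
    return position, False           # Step 4: otherwise the turn ends

# Repeat the same steps every turn until the game ends.
position, won = 0, False
while not won:
    position, won = take_turn(position, 20)
print("Reached space", position)
```

    Because every step is unambiguous, students can translate each line directly into a box or decision diamond in their flowcharts.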
