Hubbub! Blog

Latest Posts

Online Training with Stipend Support Opportunity: ‘Big Data + High-Performance Computing + Atmospheric Sciences’

Call for Participants: NSF-Funded Multidisciplinary Online Training Program with Stipend Support in Spring 2019 on Big Data + High-Performance Computing + Atmospheric Sciences

   Our training program is a new NSF-funded initiative to train graduate students, post-docs, and junior faculty in “Big Data + High-Performance Computing + Atmospheric Sciences” — that is, in big data applied to the atmospheric sciences, using high-performance computing as a vital tool. The training consists of instruction in the areas of data, computing, and atmospheric sciences, supported by teaching assistants, followed by faculty-guided project research in multidisciplinary teams with participants from each area. Participants from around the nation will gain multidisciplinary research experience and the opportunity for significant career growth.

   Building on the face-to-face training we offered in 2018, we will offer an online course in Spring 2019 (weekly from 01/28/2019 to 05/17/2019). We welcome graduate students, post-docs, and early-career faculty/researchers from US institutions to apply. Each person who finishes the program will receive a $1,500 stipend. The application deadline is 01/01/2019. The program flyer is available at http://cybertraining.umbc.edu/docs/UMBC_CyberTraining_Spring_2019.pdf, and more information can be found at http://cybertraining.umbc.edu/. Please contact us at cybertraining@umbc.edu with questions and inquiries.

  This training is funded by the NSF under OAC-1730250 “CyberTraining: DSE: Cross-Training of Researchers in Computing, Applied Mathematics and Atmospheric Sciences using Advanced Cyberinfrastructure Resources” under the solicitation Training-based Workforce Development for Advanced Cyberinfrastructure (CyberTraining).

The DataUp Workshop – Instructor Training: Inspiring Professional Development & Capacity-Building

Faculty teams from the DataUp program during the Instructor Training Workshop on Nov 6 & 7, 2018.

Society is becoming increasingly data-driven and data-literate. It is vital that every institution have the capabilities and infrastructure to engage and develop learners prepared to interact and succeed in such a society. Numerous studies have identified the expanding data divide between institution types and the need to develop successful bridge initiatives. The South Hub began to address this need by creating a 3-part program, DataUp. Through this program, the South Hub is directly impacting each participating institution’s data science education capacity.

The first component of the program is a hosted 2-day data or software workshop presented by the Carpentries. This provides an opportunity for each participating institution to engage in a workshop that specifically addresses its data knowledge gaps (for more information on these workshops, click here). Through these intensive workshops, students gain hands-on training in and exposure to principles and tools such as the Unix shell and JupyterHub. Removing the associated ‘fear factors’ empowers learners to take on challenges with data. The second component of the DataUp program is a 2-day pedagogy-intensive instructor training.

On Nov 6 & 7, 2018, the DataUp program welcomed participating institutions for the instructor training. During this workshop, faculty teams engaged in a pedagogy intensive to learn best practices for data science education. Many instructors noted the timeliness of this training not only for their students but for faculty members’ overall professional development. One faculty member noted the ‘workshop is great to teach techniques [necessary] to teach concepts like these [at my home institution]. In [a] purely doctoral program, they don’t teach pedagogy’. The workshop did not include analytical software training; instead it covered mindset cultivation, the benefits of participatory live coding, managing diverse classrooms, and more. The workshop also included multiple participatory exercises for faculty teams to practice and commit to memory the techniques and best practices needed to actively engage learners with complex concepts.

Here are a few points from the workshop:

    • Most students/learners approach computational and analytical concepts with a fixed mindset, and typically a negative one. It’s not that they can’t learn the skills; they simply start out believing they are unable to learn the concepts, and their actions begin to follow their mindset.
    • Participatory live coding is great for demonstrating, reinforcing, and engaging all learning styles.  Don’t be afraid to make mistakes. Through imperfection, learners can watch and learn proper troubleshooting techniques.
    • Patience is key.  Don’t expect students to learn and understand at your pace.  Move at the speed of the class. The goal is competence, not speed.

The workshop also provided faculty members the opportunity to explore use cases for JupyterHub, a multi-user server for Jupyter notebooks. JupyterHub is an open-source web application that lets many users create and share documents with live code, equations, visualizations, and narrative text. Faculty from numerous institutions noted that a challenge to teaching data and analytical concepts is the lack of institutional infrastructure to support these initiatives. One faculty member stated, ‘we are a small institution and don’t have the large IT [department] to help set up [or troubleshoot]’. Utilizing JupyterHub helps to alleviate this issue. Typically, to teach a lesson, instructors would need to ensure each student has the correct software versions and updates installed on their own computer. When an issue arises, troubleshooting takes valuable instructor time away from actually teaching the lesson. Students may become less engaged or come to see the concepts and lesson as cumbersome. Either way, this does not encourage students or faculty to utilize analytical software.

For institutions with limited or no infrastructure, JupyterHub provides a great alternative that alleviates the challenges of setup, increases classroom instruction time, and enhances participatory learning.
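To make the setup savings concrete, here is a minimal sketch of what a classroom JupyterHub configuration can look like. This is an illustration only, not the configuration used in the DataUp workshops; the usernames and port are hypothetical, and a real deployment would pair this with an authenticator and spawner suited to the campus environment.

```python
# jupyterhub_config.py -- minimal sketch for a small classroom hub
# (usernames and port below are illustrative, not from the workshop)
c.JupyterHub.bind_url = 'http://:8000'  # one URL the whole class visits; no per-laptop installs
c.Authenticator.allowed_users = {'student01', 'student02', 'instructor'}  # course roster
c.Spawner.default_url = '/lab'  # drop each user into the JupyterLab interface on login
```

Because every participant works through the browser against the same server, the instructor manages software versions in one place instead of on every student laptop.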

The third programmatic component requires participants to use the pedagogical best practices learned during the instructor training to teach a bootcamp, workshop, seminar, or 2-day training at their institution in 2019. Check back for information on their 2019 self-directed workshops.

To view photos from the event, click here.

For more information regarding the on-campus workshops, please click here.  

DataUp Workshop – Old Dominion University: A Melting Pot of Learners and Perspectives Creates an Impactful Workshop

Learners, instructors, and ‘workshop helpers’ from Old Dominion University pose after a collaborative and engaging 2-day workshop with shell, git, R, and JupyterHub.  

The DataUp program visited Old Dominion University on Oct 25–26 to introduce the shell, git, R, and JupyterHub. The workshop brought together a ‘melting pot’ of students, faculty, and staff from various corners of the university, all eager to engage with the analytical tools over the 2-day workshop. The concepts were chosen by ODU because the Unix shell and R are utilized in its High-Performance Computing Center. Though intensive, one student noted, ‘it’s like drinking from an 8-hour firehose, but the information is great. I knew nothing before Day 1 and [now I feel] more confident [after the first day]’. The 2-person instructor team led the traditional Carpentries curriculum, and their instruction was magnified by multiple ‘workshop helpers’. In training workshops such as these, learners bring various expertise levels and learning styles; the multiple ‘workshop helpers’ and instructors reinforced the concepts, and their diverse points of view benefited the wide range of learning styles. This was especially beneficial to a professor who noted that she learned to code about 15 years ago and was very nervous to retool herself but said ‘this workshop helped [her] to remove her fear [of coding]’.

Usually, sharing personal use cases helps learners identify how to implement analytical tools in their research or classroom activities. One instructor shared that a benefit of an open-source collaborative tool such as JupyterHub is the ability to assist individuals in other disciplines across the world. For example, he developed code for his GIS research, and a ‘community member’ was able to utilize the code for their aquaculture project. Although they have never met in person, together they consistently update the code for efficiency and future utilization. As research interests become more global, the ability to build a community and collaborate regardless of locale piqued individuals’ interest in the collaborative tool. Themes such as collaboration and connectivity were highlighted consistently throughout the workshop. There is no doubt that these learners and ODU will continue to harness these themes and work to increase their capacity for data analysis.

View Photos from the Workshop.