
Lesson 3. Wikipedia

The Ohio State University Libraries

This lesson introduces pandas.read_html, a useful tool for extracting tables from HTML, and continues to explore BeautifulSoup, a Python library for parsing XML and HTML documents. We will start by gathering the artists listed on the List of Rock and Roll Hall of Fame inductees page on Wikipedia. We will then assemble discographies for two or three of our favorite artists.

Data skills | concepts

  • Search parameters
  • HTML
  • Web scraping
  • Pandas

Learning objectives

  1. Extract and store tables and other HTML elements in a structured format
  2. Apply best practices for managing data

This tutorial is designed to support multi-session workshops offered by The Ohio State University Libraries Research Commons. It assumes you already have a basic understanding of Python, including how to iterate through lists and dictionaries to extract data using a for loop. To learn basic Python concepts, visit the Python - Mastering the Basics tutorial.

LESSON 3

Lesson 1 and Lesson 2 introduced the basic steps for any web scraping or API project:

  1. Review and understand copyright and terms of use.
  2. Check to see if an API is available.
  3. Examine the URL.
  4. Inspect the elements.
  5. Identify Python libraries for the project.
  6. Write and test code.

Pandas

.read_html()

Read HTML tables directly into DataFrames with .read_html(). This extremely useful tool extracts every table present in a specified URL or file and returns them as a list of DataFrames, so each table can be accessed using standard list indexing and slicing syntax.

The following code instructs Python to go to the Wikipedia page List of states and territories of the United States and retrieve the second table (index 1).

import pandas as pd

# Retrieve every table on the page as a list of DataFrames
tables = pd.read_html('https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States')

# Index 1 selects the second table on the page
tables[1]

BeautifulSoup

.find_previous() and .find_all_previous()

Similar to .find_next() and .find_all_next(), .find_previous() returns the first instance of a named tag that appears earlier in the document, while .find_all_previous() returns all earlier instances.
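A minimal sketch of the difference, using a made-up HTML fragment (the headings and band names here are illustrative, not data from the Wikipedia page):

from bs4 import BeautifulSoup

# Hypothetical fragment: two induction-year headings, each followed by an artist
html = """
<h2>Inducted 1988</h2>
<p>The Beach Boys</p>
<h2>Inducted 1989</h2>
<p>The Rolling Stones</p>
"""

soup = BeautifulSoup(html, 'html.parser')

# Start from the second <p> tag and search backward through the document
last_artist = soup.find_all('p')[1]

# .find_previous() returns only the nearest earlier <h2>
print(last_artist.find_previous('h2').text)      # Inducted 1989

# .find_all_previous() returns every earlier <h2>, nearest first
for heading in last_artist.find_all_previous('h2'):
    print(heading.text)                          # Inducted 1989, then Inducted 1988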

Managing files

There are several best practices and considerations for effectively managing research data files. When extracting and storing data locally in individual files, standardized file naming conventions not only help you organize and use your files efficiently but also make sharing and collaborating on future projects easier. A short sketch after the list below shows one way to generate such names in Python.

  • Use short, descriptive names.
  • Use _ underscores or - dashes instead of spaces in your file names.
  • Use leading zeros for sequential numbers to ensure proper sorting.

file_001_20250506.txt
file_002_20250506.txt

  • Use all lowercase for directory and filenames if possible.
  • Avoid special characters, including ~!@#$%^&*()[]{}?:;<>|\/
  • Use standardized dates YYYYMMDD to track versions and updates.
  • Include version control numbers to keep track of projects.
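A minimal sketch of generating names that follow these conventions, assuming a base name of file and a two-file sequence (both are illustrative):

from datetime import date

# Today's date in YYYYMMDD format for versioning
today = date.today().strftime('%Y%m%d')

# Zero-padded sequence numbers (001, 002, ...) keep files sorting correctly
for number in range(1, 3):
    filename = f"file_{number:03d}_{today}.txt"
    print(filename)  # e.g. file_001_20250506.txt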

os module

The os module provides functions for interacting with the operating system, including creating directories and building the paths where Python finds and saves files.

os.mkdir('path')

Creates a new directory in your project folder or another specified location. If a directory with the same name already exists at the specified path, os.mkdir will raise a FileExistsError (a subclass of OSError). Use a try-except block to handle the error.

import os

artist = 'the_supremes'  # example name; in the lesson this comes from your scraped data

try:
    os.mkdir(artist)
except FileExistsError:
    print(f"Directory '{artist}' already exists.")
except Exception as e:
    print(f"An error occurred: {e}")