Scraping Pro Football Reference with Python

Written on August 13th, 2019 by Steven Morse

Pro Football Reference is a stat-head’s dream — there is a wealth of football information, it is freely accessible right on the site (no API keys or registration required), and it is cleanly formatted, which makes data scraping a non-headache-inducing endeavor.

This post outlines how to grab historical fantasy points for individual players using Python. (Here’s an older, shorter post using R.) We’ll be able to do plots like this, which groups similar players based on their point average and variance.

[Figure: scatter plots of fantasy-point average vs. standard deviation, one panel per position, grouping similar players]

Scraping a single table

The hard way to scrape a table is to grab the raw HTML with requests and then parse the table structure by hand with BeautifulSoup. But if we don’t need much fine control, there’s a better way.
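Just for contrast, here’s a minimal sketch of that hard way, using the same Russell Wilson page as the example below; the parsing is deliberately bare-bones:

import requests
from bs4 import BeautifulSoup

url = 'https://www.pro-football-reference.com/players/W/WilsRu00/fantasy/2018/'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')

# walk the first <table> row by row, pulling the text out of every cell
rows = []
for tr in soup.find('table').find_all('tr'):
    rows.append([cell.get_text() for cell in tr.find_all(['th', 'td'])])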

The easy way to scrape a table is using pandas.read_html. Here’s what it looks like to scrape Russell Wilson’s fantasy data from 2018 with 3 lines of code:

import pandas as pd

url = 'https://www.pro-football-reference.com/players/W/WilsRu00/fantasy/2018/'
df = pd.read_html(url)[0]
  Unnamed: 0_level_0  ...  Unnamed: 4_level_0  ...  Inside 20        Inside 10        Snap Counts
  Unnamed: 0_level_1  ...  Unnamed: 4_level_1  ...  Passing Rushing  Passing Rushing  ...
                  Rk   G#        Date   Tm  Unnamed: 4_level_2  ...
0                1.0  1.0  2018-09-09  SEA                   @  ...
1                2.0  2.0  2018-09-17  SEA                   @  ...

That’s it, folks! read_html returns a list of all the <table>s on the page, each as a pandas.DataFrame, and since we peeked at the page we know to just grab the first one and we’re off and running!

Obviously we have some seriously ugly column headers to deal with, but we can get to that later.
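(A quick preview of that fix, which the big scraping loop below also uses: collapse the stacked header rows by keeping only the bottom level of the column MultiIndex.)

# keep only the bottom level of the three stacked header rows
df.columns = df.columns.get_level_values(-1)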

Scraping multiple pages

If we want to automate scraping multiple pages, though, we need to get a little more detailed. Now we’ll bring in the aforementioned requests, a third-party package for making HTTP requests, and BeautifulSoup, a package for parsing and crawling raw HTML/XML.


Notice that PFR’s naming scheme for players is a little odd. Russell Wilson’s stub is W/WilsRu00 but Antonio Brown’s is B/BrowAn04. How are we going to figure out what all these are and then loop through each page?

Check out the Fantasy Leaders page. This is a list of every active player in the season along with their season fantasy stats, ordered by overall fantasy points. Notice that each player in this table is hyperlinked to their individual page. Wouldn’t it be nice if we could crawl through this table and record each player’s URL stub? We can’t do this with our pandas.read_html trick anymore, because it reduces each hyperlink to its anchor text. (That is, from <a href="url">Player</a> it keeps only Player.)

But we can do it: let’s use requests to grab the raw HTML, then BeautifulSoup to crawl through the Name column of the table, recording things like the hyperlink as we go.

Here’s how to grab the first table as a BeautifulSoup object:

import requests
from bs4 import BeautifulSoup

url = 'https://www.pro-football-reference.com'
year = 2018

r = requests.get(url + '/years/' + str(year) + '/fantasy.htm')
soup = BeautifulSoup(r.content, 'html.parser')
parsed_table = soup.find_all('table')[0]

The r = requests.get(...) line grabs the webpage, and we give the raw HTML r.content to BeautifulSoup and make the object soup. We can then do things like find_all instances of the 'table' tag, and grab the first one with [0].

Now that we have the raw HTML of the table stored in parsed_table, here’s how to loop through the rows of the table, grabbing the player name entry (which you may notice from trawling the page source is conveniently labeled with the attribute data-stat="player") and extracting the parts of the <a> tag that we want:

# first 2 rows are col headers, so skip them with [2:]
for i, row in enumerate(parsed_table.find_all('tr')[2:]):
    dat = row.find('td', attrs={'data-stat': 'player'})
    name = dat.a.get_text()
    stub = dat.a.get('href')
    # NEXT: use `stub` to access the player page

This stub contains the nugget of information we came for. We’re now ready to combine our code so far into one enormous, glorious loop:

url = 'https://www.pro-football-reference.com'
year = 2018
maxp = 300

# grab fantasy players
r = requests.get(url + '/years/' + str(year) + '/fantasy.htm')
soup = BeautifulSoup(r.content, 'html.parser')
parsed_table = soup.find_all('table')[0]

df = []

# first 2 rows are col headers
for i, row in enumerate(parsed_table.find_all('tr')[2:]):
    if i % 10 == 0: print(i, end=' ')
    if i >= maxp:
        print('\nComplete.')
        break

    try:
        dat = row.find('td', attrs={'data-stat': 'player'})
        name = dat.a.get_text()
        stub = dat.a.get('href')
        stub = stub[:-4] + '/fantasy/' + str(year)   # swap '.htm' for the fantasy-page path
        pos = row.find('td', attrs={'data-stat': 'fantasy_pos'}).get_text()

        # grab this player's stats
        tdf = pd.read_html(url + stub)[0]

        # get rid of the MultiIndex, keep only the last level
        tdf.columns = tdf.columns.get_level_values(-1)

        # fix the away/home column
        tdf = tdf.rename(columns={'Unnamed: 4_level_2': 'Away'})
        tdf['Away'] = [1 if x == '@' else 0 for x in tdf['Away']]

        # drop all intermediate stats
        tdf = tdf.iloc[:, [1, 2, 3, 4, 5, -3]]

        # drop the "Total" row
        tdf = tdf.query('Date != "Total"')

        # add other info
        tdf['Name'] = name
        tdf['Position'] = pos
        tdf['Season'] = year

        df.append(tdf)
    except Exception:
        # skip rows that don't parse (e.g., repeated header rows)
        pass

df = pd.concat(df)
df.head()
    G#        Date   Tm  Away  Opp  FantPt         Name Position  Season
0  1.0  2018-09-10  LAR     1  OAK    20.7  Todd Gurley       RB    2018
1  2.0  2018-09-16  LAR     0  ARI    29.3  Todd Gurley       RB    2018
2  3.0  2018-09-23  LAR     0  LAC    19.6  Todd Gurley       RB    2018
3  4.0  2018-09-27  LAR     0  MIN    21.6  Todd Gurley       RB    2018
4  5.0  2018-10-07  LAR     1  SEA    29.3  Todd Gurley       RB    2018

And since this will probably take a few minutes (depending on your maxp setting and your internet connection), I recommend saving the df to a CSV:

df.to_csv('fantasy2018.csv')
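Then, in a later session, you can reload it without re-scraping (the index_col=0 assumes the default index was written out, as above):

df = pd.read_csv('fantasy2018.csv', index_col=0)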

Plotting average vs. variance

So now you can easily grab one or more players’ fantasy point performance trajectories over the season and plot them if you so desire:

(df.query('Name == "Saquon Barkley"')
   .plot('Date', 'FantPt'))

[Figure: Saquon Barkley's game-by-game fantasy points over the 2018 season]

To get a feel for more of the data, let’s look at some summary statistics. My first thought is to reduce the trajectory of each player’s fantasy performance to a mean and a variance. The ideal player has a high mean/average point total, and doesn’t deviate too far away from it. A boom-or-bust player has, perhaps, a moderate to high mean but an extremely high variance.

Here’s the (brute force) code:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 4, sharey=True, figsize=(15, 4))
cols = ['k', 'c', 'g', 'b']

for i, pos in enumerate(["QB", "RB", "WR", "TE"]):
    t = (df
         .query('Position == @pos')
         .groupby('Name')
         .agg({'FantPt': ['mean', 'std'], 'Position': 'min'}))
    ax[i].scatter(t[('FantPt', 'mean')], t[('FantPt', 'std')],
                  c=cols[i], s=50, alpha=0.5, label=pos)
    ax[i].set(xlabel='Avg Pts', title=pos)

# label some outliers
ax[0].annotate('P. Mahomes', xy=(26, 6.76), xytext=(16, 2),
               arrowprops={'facecolor': 'black', 'width': 0.1, 'shrink': 0.08})
ax[1].annotate('T. Gurley', xy=(22.36, 8.1), xytext=(15, 2),
               arrowprops={'facecolor': 'black', 'width': 0.1, 'shrink': 0.08})
ax[2].annotate('D. Adams', xy=(14.57, 4.2), xytext=(9, 2),
               arrowprops={'facecolor': 'black', 'width': 0.1, 'shrink': 0.1})
ax[3].annotate('T. Kelce', xy=(11.97, 7.5), xytext=(9, 2),
               arrowprops={'facecolor': 'black', 'width': 0.1, 'shrink': 0.1})

ax[0].set(ylim=[1, 13])
plt.tight_layout()
plt.show()

And the plot is the one at the beginning of the blog.

What’s next

It’s a bit silly to pull all this data just to compress it to two summary statistics, but this is a short post. It may be interesting to group similar players based on the dynamics of their entire time series, although this may not be very meaningful or predictive.
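For the curious, here is a hedged sketch of what that grouping might look like (this is not from the analysis above, just one possible approach): turn each player's season into a fixed-length vector of weekly points and cluster those vectors with scikit-learn's KMeans. Zero-filling missed weeks is a crude assumption.

from sklearn.cluster import KMeans

# one row per player, one column per game number; fill missed weeks with 0 (crude!)
X = (df.pivot_table(index='Name', columns='G#', values='FantPt')
       .fillna(0))
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)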

All that aside, hopefully this post gives you some basic tools to do simple web scraping in Python.

