How to make an 'API' in Python
robowolf (285)

When requesting information from different websites, it's very useful to have a Python wrapper for their API. But some sites don't provide one. In this tutorial I will walk you through how to make an API wrapper for a website by scraping it. We will be using my Repl API as an example.

Packages

To start, you will need to import requests and Beautiful Soup (bs4):

import requests
from bs4 import BeautifulSoup

Setting Up the Format

What you're going to want to do is set up a class. Of course you could use plain functions, but with a class you get methods (.blah()) which look way cooler (a function-based sketch follows the code below). To do this, simply define a class:

class Api:
    def __init__(self):
        pass

replit = Api()  # for this project I'm using replit, but you can name it whatever you want
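
For comparison, here's a rough sketch of the function-based alternative (just to show the difference; the rest of the tutorial sticks with the class):

# the same idea with a plain function - no class, so no replit.blah(), just blah()
def blah():
    pass

blah()  # called directly instead of as a method on an object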

Making Methods/Getting Information

Now for the fun part. What you will want to do is come up with an idea. I'm going to use a user's cycles for this part. First, set up a new method in your class:

class Api:
    def __init__(self):
        pass

    def user_cycles(self, user):
        pass

replit = Api()

Next, we use Beautiful Soup and requests to scrape the page (you will need to inspect the page to find the HTML tags you want to get):

class Api:
    def __init__(self):
        pass

    def user_cycles(self, user):
        # fetch the user's profile page
        cycle_count = requests.get("https://repl.it/@" + user)
        # parse the HTML and find the span that holds the cycle count
        supper = BeautifulSoup(cycle_count.content, 'html.parser')
        httm = supper.find('span', {"title": "cycles"})
        # pull out just the text inside the tag
        value = httm.get_text()
        return value

replit = Api()
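
One thing to watch out for (optional, just a sketch): if the username doesn't exist or Replit changes the page layout, supper.find() returns None and calling .get_text() on it will crash. A small guard inside user_cycles keeps it from blowing up:

        httm = supper.find('span', {"title": "cycles"})
        if httm is None:
            return None  # user not found or the page layout changed
        value = httm.get_text()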

(You might notice this is different than what I did in the original repl. In the original I hard-coded the tags out by hand. That comes in useful for comments, but for cycles we can just use the .get_text() method.)
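
To make that concrete (a rough sketch; the exact HTML on the page may differ):

raw = str(httm)          # the raw HTML, roughly '<span title="cycles">(220)</span>' - you'd trim the tags by hand
value = httm.get_text()  # just the text inside the tag: '(220)'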
Right now our code returns the count with parentheses around it, something like (220). Now we just add this code before the return to strip them:

value = value.replace('(','')
value = value.replace(')','')
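
(As a side note, value.strip('()') does the same thing in one call; I'll keep the two replace() calls for the final version.)

value = value.strip('()')  # removes the surrounding parentheses in one step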

The final product should look like this:

import requests
from bs4 import BeautifulSoup

class Api:
    def __init__(self):
        pass

    def user_cycles(self, user):
        # fetch the user's profile page
        cycle_count = requests.get("https://repl.it/@" + user)
        # parse the HTML and find the span that holds the cycle count
        supper = BeautifulSoup(cycle_count.content, 'html.parser')
        httm = supper.find('span', {"title": "cycles"})
        # pull out the text and strip the parentheses
        value = httm.get_text()
        value = value.replace('(', '')
        value = value.replace(')', '')
        return value

replit = Api()

How to use your API

First, copy and paste the code into a file named whatever you want (I like to use api.py). Then in main.py do:

from api import *

Next, print the result or do whatever you want with your method:

replit.user_cycles('robowolf')
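
Putting it together, a minimal main.py might look like this (robowolf is just my username; any Replit username works):

from api import *

# print the cycle count for a given user
print(replit.user_cycles('robowolf'))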

Conclusion

Hopefully this guides you in the right direction and teaches you some web scraping. Post your questions down below. Enjoy - Robowolf

JBYT27 (1239)

Nice tutorial! The repl doesn't work though :(