Posts
Pinned
160
👋✨Introduce yourself 🎉😀
Hello everyone! Let's use this thread to get to know each other. Just say hi and a few words about who you are, and maybe what you're building or learning with Repl.it. If you're not comfortable sharing anything, then share something fun 😉
534
posted to Announcements by amasad (452) 3 months ago
Pinned
20
From Scratch: AI Balancing Act in 50 Lines of Python
![Cart Pole Balancing](https://media.giphy.com/media/QKFkgF2ZtTVbsZBeb2/giphy.gif)

Hi everyone! Today I want to show how, in 50 lines of Python, we can teach a machine to balance a pole! We'll be using the standard OpenAI Gym as our testing environment, and we'll create our agent with nothing but numpy. I'll also be going through a crash course on reinforcement learning, so don't worry if you don't have prior experience!

The cart pole problem is where we have to push the cart left and right to balance a pole on top of it. It's similar to balancing a pencil vertically on our fingertip, except in 1 dimension (quite challenging!). You can check out the final repl here: https://repl.it/@MikeShi42/CartPole.

## RL Crash Course

If this is your first time in machine learning or reinforcement learning, I'll cover some basics here so you'll have a grounding in the terms we'll be using :). If this isn't your first time, you can hop on down to developing our policy!

**Reinforcement Learning**

Reinforcement learning (RL) is the field of study concerned with teaching agents (our algorithm/machine) to perform certain tasks/actions without explicitly telling them how to do so. Think of a baby moving its legs at random; if by luck the baby stands upright, we hand it a candy/reward. Similarly, the agent's goal will be to maximize the total reward over its lifetime, and we will decide the rewards so that they align with the tasks we want to accomplish. For the standing-up example, that would be a reward of 1 when standing upright and 0 otherwise.

An example of an RL agent would be AlphaGo, where the agent has learned how to play the game of Go to maximize its reward (winning games). In this tutorial, we'll be creating an agent that can solve the problem of balancing a pole on a cart, by pushing the cart left or right.

**State**

A state is what the game looks like at the moment. We typically deal with a numerical representation of the game. In the game of pong, it might be the vertical position of each paddle and the x, y coordinates of the ball. In the case of cart pole, our state is composed of 4 numbers: the position of the cart, the speed of the cart, the position of the pole (as an angle), and the angular velocity of the pole. These 4 numbers are given to us as an array (or vector). This is important: understanding that the state is an array of numbers means we can do mathematical operations on it to decide what action we want to take according to the state.

**Policy**

A policy is a function that takes the state of the game (ex. the position of board pieces, or where the cart and pole are) and outputs the action the agent should take in that position (ex. move the knight, or push the cart to the left). After the agent takes the action we chose, the game updates with the next state, which we'll feed into the policy again to make a decision. This continues until the game ends in some way. The policy is very important and is what we're looking for, as it is the decision-making ability behind an agent.

**Dot Products**

A dot product between two arrays (vectors) is simply multiplying each element of the first array by the corresponding element of the second array, and summing it all together. Say we wanted to find the dot product of arrays A and B; it would simply be A[0]*B[0] + A[1]*B[1] + ... We'll be using this operation to multiply the state (which is an array) by another array (which will be our policy). We'll see this in action in the next section.
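To make this concrete, here's a quick numpy sketch of the dot product; the numbers are made up, purely for illustration:

```python
import numpy as np

# Two small example arrays (vectors)
A = np.array([1, 2, 3])
B = np.array([4, 5, 6])

# Dot product: multiply matching elements, then sum them up
# 1*4 + 2*5 + 3*6 = 32
print(np.dot(A, B))  # 32
```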
## Developing our Policy

To solve our game of cart pole, we'll want to let our machine learn a strategy or a policy to win the game (or maximize our rewards).

For the agent we'll develop today, we'll be representing our policy as an array of 4 numbers that represent how "important" each component of the state is (the cart position, pole position, etc.), and then we'll dot product the policy array with the state to output a single number. Depending on whether the number is positive or negative, we'll push the cart left or right.

If this sounds a bit abstract, let's pick a concrete example and see what will happen. Let's say the cart is centered in the game and stationary, and the pole is tilted to the right and is also falling towards the right. It'll look something like this:

![game](https://i.imgur.com/KjSx230.png)

And the associated state might look like this:

![state table](https://i.imgur.com/E6luv5U.png)

The state array would then be [0, 0, 0.2, 0.05].

Now intuitively, we'll want to straighten the pole back up by pushing the cart to the right. I've taken a good policy from one of my training runs and its policy array reads: [-0.116, 0.332, 0.207, 0.352]. Let's do the math by hand real quick and see what this policy will output as an action for this state.

Here we'll dot product the state array [0, 0, 0.2, 0.05] and the policy array (pasted above). If the number is positive, we push the cart to the right; if the number is negative, we push left.

![dot product between policy and state](https://i.imgur.com/OStbHOq.png)

The result is positive, which means the policy also would've pushed the cart to the right in this situation, exactly how we'd want it to behave.

Now this is all fine and dandy, and clearly all we need are 4 magic numbers like the ones above to help solve this problem. Now, how do we get those numbers? What if we just picked them at random? How well would that work? Let's find out and start digging into the code!

## Start Your Editor!

Let's pop open a Python instance on repl.it. As you might already know, Repl.it allows you to quickly bring up cloud instances of a ton of different programming environments and edit code within a powerful cloud IDE that is accessible anywhere!

![New Python Project on Repl.it](https://media.giphy.com/media/4T1KlYdYoabwmzSfIS/giphy.gif)

## Install the Packages

We'll start off by installing the two packages we need for this project: numpy to help with numerical calculations, and OpenAI Gym to serve as our simulator for our agent.

![Installing Gym Package on Repl.it](https://media.giphy.com/media/82OOeCknK40hPvkDQv/giphy.gif)

Simply type `gym` and `numpy` into the package search tool on the left-hand side of the editor and click the plus button to install the packages.

## Laying Down the Foundations

Let's first import the two dependencies we just installed into our main.py script and set up a new gym environment:

```python
import gym
import numpy as np

env = gym.make('CartPole-v1')
```

Next we'll define a function called `play` that will be given an environment and a policy array, play the policy array in the environment, and return the score along with a snapshot (observation) of the game at each timestep. We'll use the score to tell us how well the policy played, and the snapshots to watch how the policy did in a single game. This way we can test different policies and see how well they do in the game!

Let's start off with the function definition and resetting the game to a starting state.
```python
def play(env, policy):
  observation = env.reset()
```

Next we'll initialize some variables to keep track of whether the game is over yet, the total score of the policy, and the snapshots (observations) of each step during the game.

```python
  done = False
  score = 0
  observations = []
```

Now we'll simply play the game for a lot of time steps, until gym tells us the game is done.

```python
  for _ in range(5000):
    observations += [observation.tolist()] # Record the observations for normalization and replay
    if done: # If the simulation was over last iteration, exit loop
      break

    # Pick an action according to the policy matrix
    outcome = np.dot(policy, observation)
    action = 1 if outcome > 0 else 0

    # Make the action, record reward
    observation, reward, done, info = env.step(action)
    score += reward

  return score, observations
```

The bulk of the code above is just playing the game and recording the outcome. The actual code that is our policy is simply these two lines:

```python
outcome = np.dot(policy, observation)
action = 1 if outcome > 0 else 0
```

All we're doing is the dot product operation between the policy array and the state (observation) array, like we showed in the concrete example earlier. Then we choose an action of 1 or 0 (left or right) depending on whether the outcome is positive or negative.

So far our main.py should look like this: [Github Gist](https://gist.github.com/MikeShi42/c6ea4f19bf628cc40dc9c76087f5d4fb)

Now we'll want to start playing some games and find our optimal policy!

## Playing the First Game

Now that we have a function to play the game and tell how good our policy is, we'll want to start generating some policies and see how well they do. What if we just tried plugging in some random policies at first? How far can we go?

Let's use numpy to generate our policy, which is a 4-element array (a 1x4 matrix). It'll pick 4 numbers between 0 and 1 to use as our policy.

```python
policy = np.random.rand(1,4)
```

With that policy in place, and the environment we created above, we can plug them into play and get a score.

```python
score, observations = play(env, policy)
print('Policy Score', score)
```

Simply hit run to run our script. It should output the score our policy got.

![policy score of 9.0](https://i.imgur.com/z7EUsWM.png)

The max score for this game is 500, and chances are your policy didn't fare so well. If it did, congrats! It must be your lucky day! Just seeing a number isn't very rewarding, though; it'd be great if we could visualize how our agent plays the game, and in the next step we'll set that up!

## Watching our Agent

To watch our agent, we'll use [flask](http://flask.pocoo.org/) to set up a lightweight server so we can see our agent's performance in our browser. Flask is a light Python HTTP server framework that can serve our HTML UI and data. I'll keep this part brief, as the details behind rendering and HTTP servers aren't critical to training our agent.

We'll first want to install `flask` as a Python package, just like how we installed gym and numpy in the previous sections.

![Installing Flask Gif](https://media.giphy.com/media/67s8AmPt8GbZdDlhCD/giphy.gif)

Next, at the bottom of our script, we'll create a flask server. It'll expose the recording of each frame of the game on the `/data` endpoint and host the UI on `/`.
```python
from flask import Flask
import json

app = Flask(__name__, static_folder='.')

@app.route("/data")
def data():
  return json.dumps(observations)

@app.route('/')
def root():
  return app.send_static_file('./index.html')

app.run(host='0.0.0.0', port='3000')
```

Additionally we'll need to add two files. One will be a blank Python file added to the project. This is a technicality of how repl.it detects whether the repl is in [eval mode or project mode](https://repl.it/site/docs/repls/files). Simply use the new file button to add a blank Python script.

After that we also want to create an index.html that will host the rendering UI. I won't dive into the details here, but simply **upload this [index.html](https://gist.github.com/MikeShi42/7b5ff55e2320e41228b5c25ad1113321) to your repl.it project**.

You should now have a project directory that looks like this:

![project directory screenshot](https://i.imgur.com/xOTcsbj.png)

With these two files in place, when we run the repl it should also play back how our policy did. With that set up, let's try to find an optimal policy!

![Policy Replay Gif](https://media.giphy.com/media/1ZuOyqmDLilxJrsQi0/giphy.gif)

## Policy Search

In our first pass, we simply picked one policy at random. But what if we picked a handful of policies, and only kept the one that did the best? Let's go back to the part where we play the policy, and instead of just generating one, let's write a loop to generate a few, keep track of how well each policy did, and save only the best one.

We'll first create a tuple called `max` that will store the score, observations, and policy array of the best policy we've seen so far.

```python
max = (0, [], [])
```

Next we'll generate and evaluate 10 policies, and save the best policy in `max`.

```python
for _ in range(10):
  policy = np.random.rand(1,4)
  score, observations = play(env, policy)
  if score > max[0]:
    max = (score, observations, policy)

print('Max Score', max[0])
```

We'll also have to tell our /data endpoint to return the replay of the best policy.

```python
@app.route("/data")
def data():
  return json.dumps(observations)
```

should be changed to

```python
@app.route("/data")
def data():
  return json.dumps(max[1])
```

Your main.py should look something like [this now](https://gist.github.com/MikeShi42/3c270ce2d13f2709ef2d5983492a1693).

If we run the repl now, we should get a max score of 500; if not, try running the repl one more time! We can also watch the policy balance the pole perfectly fine. Wow, that was easy!

## Not So Fast

Or maybe it isn't. We cheated a bit in the first part, in a couple of ways. First of all, we only created random policy arrays in the range 0 to 1. This just *happens* to work, but if we flipped the greater-than operator around, we'd see that the agent fails pretty catastrophically. To try it yourself, change `action = 1 if outcome > 0 else 0` to `action = 1 if outcome < 0 else 0`.

This doesn't seem very robust: if we had just happened to pick less than instead of greater than, we could never find a policy that solves the game. To alleviate this, we should actually generate policies with negative numbers as well. This will make it more difficult to find a good policy (as a lot of the negative ones aren't good), but we're no longer "cheating" by fitting our specific algorithm to this specific game. If we tried to do this on other environments in the OpenAI gym, our algorithm would definitely fail.
To do this, instead of `policy = np.random.rand(1,4)`, we'll use `policy = np.random.rand(1,4) - 0.5`. This way each number in our policy will be between -0.5 and 0.5 instead of 0 and 1. But because this is more difficult, we'd also want to search through more policies. In the for loop above, instead of iterating through 10 policies, let's try 100 policies by changing the code to read `for _ in range(100):`. I also encourage you to try iterating through just 10 policies first, to see how hard it is to get good policies now with negative numbers.

Now our main.py should look [like this](https://gist.github.com/MikeShi42/e1c5551bbf2cb2064da962ad8b198c1b).

If you run the repl now, no matter whether we use greater than or less than, we can still find a good policy for the game.

## Not So Fast Pt. 2

But wait, there's more! Even though our policy might be able to achieve the max score of 500 on a single run, can it do it every time? When we've generated 100 policies and picked the policy that did best on its single run, that policy might've just gotten very lucky; it could be a very bad policy that happened to have one very good run. This is because the game itself has an element of randomness to it (the starting position is different every time), so a policy could be good at just one starting position but not others.

So to fix this, we'd want to evaluate how well a policy does over multiple trials. For now, let's take the best policy we found from before and see how well it does over 100 trials.

```python
scores = []
for _ in range(100):
  score, _ = play(env, max[2])
  scores += [score]

print('Average Score (100 trials)', np.mean(scores))
```

Here we're playing the best policy (index 2 of `max`) 100 times, and recording the score each time. We then use numpy to calculate the average score and print it to our terminal. There's no hard published definition of "solved", but it should be only a few points shy of 500. You might notice that the best policy might actually be subpar sometimes. However, I'll leave the fix up to you to decide!

## done=True

Congrats! 🎉 We've successfully created an AI that can solve cart pole very effectively, and rather efficiently. Now there's a lot of room for improvement, which will be part of a later article in this series. Some things we could investigate further:

- Finding a "real" optimal policy (one that will do well in 100 separate plays)
- Optimizing the number of times we have to search to find an optimal policy ("sample efficiency")
- Doing a proper search of the policy space instead of just randomly picking policies
- Solving [other environments](https://gym.openai.com/envs/#classic_control)

If you're interested in experimenting more with ML using pretrained models and out-of-the-box working code, check out [ModelDepot](https://modeldepot.io)!
0
posted to Learn by MikeShi42 (60) 7 days ago
Pinned
What is Learn? Guidelines - Read Me
# What is Repl Talk Learn?

Repl Talk Learn is the board where tutorials will be posted. Come here to learn how to build cool things on Repl.it!

# I have a tutorial! What do I do?

If you want to post a tutorial, check in with any of the Repl.it team members, or just post here as a comment. This board will be moderated - posts that are not approved will be moved to [Share](https://repl.it/talk/share).
2
posted to Learn by timmy_i_chen (318) 7 days ago
Pinned
55
Rules for Posting - Read me!
Some rules and guidelines to keep in mind as you share your great work on our boards:

1 - Be kind and courteous to others.
2 - Make sure that any feedback you provide is constructive.
3 - Outside links are allowed, but you must provide the source. Ideally, things that you post will have been created on Repl.it.
4 - Avoid posting overly promotional material - the focus is, and always will be, a programming, learning, and collaborative community. :)
5 - Don't spam / keep things SFW (Safe For Work).

We may revoke your access to these boards if you are found to be in violation of any of these rules. Feel free to ask clarifying questions.

Last updated 7/10/18 12:09 PST
29
posted to Share by timmy_i_chen (318) 3 months ago
Pinned
11
PyLisp: LISP in Just Over 100 Lines of Python
Hi everyone,

I recently posted a [little LISP](https://repl.it/@ericqweinstein/Lisplet) in JavaScript (with an [accompanying tutorial](https://medium.com/@eric.q.weinstein/lisp-repl-it-tutorial-9a8f2d7d7584)). People seemed to like it, so here's another, slightly improved version: [this time in Python](https://repl.it/@ericqweinstein/PyLisp)!

This tutorial is very similar to the JavaScript one, so I recommend you read that one first (at least to get a sense of how our program turns text into tokens, then parses the resulting structure to figure out what to do). I figured I'd do a repl talk post to highlight the differences between the Python version and the JS one.

First, our Python version includes three LISPy functions: `car` (which gets the head, or first element, of a list), `cdr` (which gets the tail of the list: everything except the first element), and `cons` (which prepends an element to a list):

```py
def __init__(self):
    self.env = {
        '==': lambda args: args[0] == args[1],
        '!=': lambda args: args[0] != args[1],
        '<': lambda args: args[0] < args[1],
        '<=': lambda args: args[0] <= args[1],
        '>': lambda args: args[0] > args[1],
        '>=': lambda args: args[0] >= args[1],
        '+': lambda args: reduce((lambda x, y: x + y), args),
        '-': lambda args: reduce((lambda x, y: x - y), args),
        '*': lambda args: reduce((lambda x, y: x * y), args),
        '/': lambda args: reduce((lambda x, y: x / y), args),
        'car': lambda args: args[0][0],
        'cdr': lambda args: args[0][1:],
        'cons': lambda args: [args[0]] + args[1]
    }
```

Try them out in the REPL to see how they work! What do you think `(car (quote (1 2 3)))` will evaluate to?

The rest of the functions (`run()`, `parse()`, `tokenize()`, `read()`, and `atom()`) are almost identical to their JavaScript versions, so let's skip ahead to the differences in our `eval()` function.

As before, we `return` if we don't have an expression to evaluate, and we return number and string literals when we see them. The major difference is that this `eval()` takes a second parameter, an optional environment hash, and uses that to look up terms that have been defined (including literals, as in `(define pi 3.14159)`, and function names, as in `(define square (lambda (x) (* x x)))`).

Skipping over our `'quote'` branch for a moment, we see that the rest of our functions use this `env` to figure out what to do. Our `'if'` branch (which handles conditionals, such as `(if (< 2 3) 'yup' 'nope')`) pulls out the test expression (in this example, `(< 2 3)`) and evaluates it, returning the first option (`'yup'`) if the expression evaluates to `True` and the second option (`'nope'`) if it's `False`. We use the `'define'` branch to add new identifiers to our environment (for naming variables and functions).

The next two branches, which handle lambdas and function calls, respectively, are a little tricky, so we'll go through them in slightly more detail. In the `'lambda'` branch, we first separate the expression into the lambda keyword (which we assign to `_`, meaning we're not going to use it in our evaluation), the parameters to the function (`var`), and the function body (`e`). We then return a Python `lambda`, recursively calling `self.eval()` on the function body, passing the bound arguments and parameters as the second argument. `dict(zip(params, args))` turns two lists (such as `['x', 'y']` and `[1, 2]`) into a dictionary, like `{'x': 1, 'y': 2}`, which represents the parameters our function accepts bound to the arguments the user passes in.
The final `else` branch handles function calls: the function name (or operator, for arithmetic) is the first element of the expression, and the arguments to the function are the rest of it. We go ahead and evaluate each argument to get its value before calling the function with these values, ultimately turning something like `(square 10)` into `(* 10 10)`, which is `100`.

Last but not least, the aforementioned `'quote'` branch handles list quoting: since function invocations are lists in LISP, we define list literals using `quote`. `(quote (1 2 3))` should return the Python list `[1, 2, 3]`, which means we can use list functions like `car`, `cdr`, and `cons`.

What do you get if you type the following?

```lisp
(define pi 3.14159)
(define square (lambda (x) (* x x)))
(define circle-area (lambda (r) (* pi (square r))))
(circle-area 100)
```

So there we have it: a little LISP in just over 100 lines of Python. Feel free to fork this REPL and add your own improvements!
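In case it helps to see the lambda and function-call logic in one place, here's a minimal standalone sketch of those two branches (simplified and not the actual PyLisp source, so treat it as an approximation and check the repl for the real code):

```py
# A stripped-down sketch of the 'lambda' and function-call branches discussed
# above. Illustrative only; the real PyLisp eval() lives in the linked repl.
def eval_expr(exp, env):
    if isinstance(exp, (int, float)):  # number literal: return it as-is
        return exp
    if isinstance(exp, str):           # identifier: look it up in the environment
        return env[exp]
    if exp[0] == 'lambda':             # (lambda (params...) body)
        _, params, body = exp
        # Return a Python lambda that evaluates the body with the
        # parameters bound to the caller's arguments.
        return lambda args: eval_expr(body, {**env, **dict(zip(params, args))})
    # Function call: evaluate the operator and every argument, then apply.
    fn = eval_expr(exp[0], env)
    args = [eval_expr(arg, env) for arg in exp[1:]]
    return fn(args)

# ((lambda (x) (* x x)) 10) -> 100
env = {'*': lambda args: args[0] * args[1]}
print(eval_expr([['lambda', ['x'], ['*', 'x', 'x']], 10], env))  # 100
```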
0
posted to Learn by ericqweinstein (69) 5 days ago
Pinned
9
Weekly Repl Highlight #2
Welcome back to our weekly repl highlights! This week we had a ton of amazing projects. At the start of the week, we received a great tutorial. At first, we weren't completely sure if we would include tutorials in the list, but after getting a second, amazing one, we just *had* to.

Now, on to our honorable mentions.

@Babbel [This is a great turtle project; if you increase the speed and wait, great art is formed!](https://repl.it/talk/share/2D-Particle-Painter/6528)

@ericqweinstein [This tutorial is fantastic, and we're so glad this person decided to share it! If only they had posted this *before* our make-a-language code jam :D](https://repl.it/talk/announcements/LISP-Tutorial-Write-a-Language-with-JavaScript/6566)

@MikeShi42 [Made an AMAZING artificial intelligence tutorial, because who doesn't want to raise a machine?](https://repl.it/talk/announcements/From-Scratch-AI-Balancing-Act-in-50-Lines-of-Python/6586)

@Joshua18 [A while back, a "cattle trade" game was made, which ended up being a hit. This appears to be its successor, and it really is fun!](https://repl.it/talk/share/Oil-trading-stimulation/6687)

@DJWang [This is our second turtle project, but it doesn't feel like it! It's a great game; play it with all your friends!](https://repl.it/talk/share/Eating-machine-2-Players/6767)

@SagaciousPan [Text adventures are always fun, and this is no exception! My only complaint is that it isn't longer :P](https://repl.it/talk/share/Text_Adventure_Version_02/6769)

Finally, the winner is.....

@Battlesquid [They created this amazing game: jump off walls and mess around. Once multiplayer is added, it will be a blast!](https://repl.it/talk/share/BOOST/6597)

Thank you to everyone that's on this list, and really everyone that posts their repls. It's so much fun looking at everyone's projects. Keep posting your awesome repls, and I'll see you all next week!
0
posted to Announcements by 21natzil (97) 1 day ago
1
python datetime minute error
https://repl.it/@DanielSchumache/ClickClock When it prints the time, the minutes say, "<attribute 'minute' of 'datetime.datetime' objects>," and I don't know why. Please help.
1
posted to Ask by DanielSchumache (0) about 2 hours ago
2
Drawing 3D shapes in a 2D Canvas with JS
Hey guys! In this Medium tutorial, I'll be showing you how to project a 3D cube (its vertices) onto a 2D canvas! https://medium.com/@caleblolhk/drawing-3d-shapes-in-a-2d-canvas-with-js-58ab9c4fd178
0
posted to Learn by JSer (812) about 6 hours ago
2
Python - Basic Stuff
I have 3 different pieces of code, and I want to put them together into one. I am terrible at code! They are all different, although all Python. When I add a file and put it there with the same code, it doesn't run. Please help!
1
posted to Ask by AaronG1 (1) about 10 hours ago
2
vigenere cipher
I'm very new to computer programming. I'm currently taking a beginners' class, and I'm a little lost. My teacher has asked me to write a function for a Vigenère cipher, and I can't figure out how to make the key repeat when the message length is greater than the key length. I've already written a Caesar cipher, so I have a pretty decent idea of how to program most of the encryption. Could anyone shed some light on this for me? Also, I'm not supposed to use the chr or ord functions.
4
posted to Ask by jcrawf20 (1) about 18 hours ago
2
Power of 2
https://repl.it/@jessberr/Power-of-2 Any hints on how to make this function print out an array of values like [1, 3, 9] instead of [3^0, 3^1, 3^2]? I'm sure it's simple, but I've gone down a rabbit hole trying to figure out what to do. Would I use something like reduce? Or would it be a matter of changing line 5 to push something different? Thanks for everyone's patience!
1
posted to Ask by jessberr (4) about 15 hours ago
3
Text Adventures
Can someone help me figure out why this code doesn't work?
3
posted to Ask by Coolguy975 (2) about 23 hours ago
2
New JS repl creates a new NodeJS repl instead
Clicking on https://repl.it/languages/javascript to create a new JS repl leads to a new NodeJS repl, as if I had clicked on https://repl.it/languages/nodejs. Is this normal?
2
posted to Ask by AdrianSkar (1) about 22 hours ago
3
repl.it mobile feature req
Doubt this is the right place for this, but can we get `overflow: hidden` on the mobile site so we can scroll without Chrome reloading the page on swipe-down? Silly Chrome feature.....
0
posted to Ask by edounn (2) about 20 hours ago
5
Self-writing code
What if a computer could write its own code?
5
posted to Ask by mcko7304 (4) 1 day ago
5
American Flag
Thought I'd do what @amasad did and make an American flag. Like if you think it's cool!
1
posted to Share by 894238 (6) 1 day ago
2
.json file won't update? (Bug maybe?)
I have a .json file for one of my repls that has nothing but an open dictionary in it, so that when a criterion is reached, the user's info is added to the dictionary via a Python script I wrote. My problem is that the .json file stores the data when the person fulfills the first criterion and logs their info, but if a new criterion is reached after that, the data does not update (I don't need code help, my script is pretty solid, so I know the error isn't there). Is this a bug?
0
posted to Ask by impulse_py (1) about 16 hours ago
2
Code auto-completion
Hi, I am using a Python repl. After I create the repl and start writing code, the code completion suggestions come up after pressing Ctrl+Space. However, after a couple of days, when I log in again, it stops working and I have to create another repl. Is there a solution to this?
2
posted to Ask by SuvojitDhole (1) 1 day ago
2
BARBOSA
do barbosa
1
posted to Share by FelipeVenerato (1) 1 day ago
2
dividing
How do you divide, like with the symbol?
2
posted to Ask by AlexMcBride (1) 1 day ago
2
How can I use tkinter?
I'm trying to create an app with Python 3.7 and tkinter, and I keep getting "no display name and no $DISPLAY environment variable". Is there any way I can fix this on repl.it?
1
posted to Ask by SimonbW (1) 1 day ago
3
How do I "push" something to a variable in a file using Node.js?
I would like to "push" an object into my msg.js file.

File content: `var m = [{name:"",desc:""}];`

I would like to "push": `{name:"test",desc:"test"}`

New file content: `var m = [{name:"",desc:""},{name:"test",desc:"test"}];`
5
posted to Ask by Coder100 (2) 2 days ago
3
Mini Project 1
How do I get my user to actually input what exercise or treat they want? For example, the program says that she can have a treat, the user then puts in "Snickers", and then the program goes ahead and runs.
21
posted to Ask by Steven204322 (2) 3 days ago
1
Hi, I am new to coding and seeking assistance in finding the quadratic root
```python
def findBiggerRoot(a, b, c):
    import math
    # if -100 <= a <= 100 and -10**5 <= b <= 10**5 and -10**6 <= c <= 10**^
    coefficient = (ax**2 + bx + c)
    input(a, b, c)
    return (ax**2 + bx + c)

def root(x):
    # formula for quadratic root is: -b +- (sqrt(*b2 - 4*a))/ 2*a
    f = math.pow(b2) - 4*a*b
    f = math.sqrt(f)
    x = (-b + f) / 2*a
    print(x)
    x = (-b - f) / 2*a
    print(x)
    x = abs(root(x))
    return (round(root(x)))

def comparissonroot(x):
    root = (abs(round(x)))
    findBiggerRoot = (x(a**2) + bx + c)
    findBiggerRoot = (x < x) or (x > X)
    return BiggerRoot
    positiveroot = (x==x)
    return positiveroot
```
1
posted to Ask by shanii (1) 1 day ago
2
Is it possible to run a curses program?
I tried to run a Python curses program which works fine on my machine; on repl.it I got:

```
Traceback (most recent call last):
  File "python", line 208, in <module>
_curses.error: setupterm: could not find terminal
```
6
posted to Ask by drkvogel (2) 2 days ago
1
ending code
I'm trying to end my program based on user input, but it's not working. What do I do?
3
posted to Ask by masonyorston (0) 1 day ago
1
bs4 import error
I have added the bs4 package. However, when I run `import bs4`, it says there is no module named bs4. Does anyone know what's happening? Thank you!
1
posted to Ask by QianyuLiang (1) 1 day ago
1
Decision structures: logical composition
Decision structures
1
posted to Challenge by cathilim19 (0) 1 day ago
1
How do we change the color of the text that is input?
abc
1
posted to Ask by mathimo (0) 1 day ago
2
3U Loops
Adomie
0
posted to Share by IsraelR (1) 1 day ago