Posted about 5 years ago by NewBee
Eventlet's monkey patch seems to break select.poll() on Python 3 in my environment (I'm trying to install OpenStack Ironic), but the OpenStack group could not reproduce this issue. Does anyone know why?
I can reproduce it simply:
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import eventlet
>>>
>>> eventlet.monkey_patch()
>>> import select
>>> select.poll
Traceback (most recent call last):
File "", line 1, in
AttributeError: module 'select' has no attribute 'poll'
>>> eventlet.version_info
(0, 25, 0)
>>>
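For context, a version-dependent explanation offered as an assumption: eventlet's green select module does not provide poll(), and eventlet releases around 0.25 began deleting select.poll outright after monkey-patching, which would explain why environments pinned to an older eventlet cannot reproduce this. A minimal sketch to confirm where the attribute went; eventlet.patcher.original() returns the unpatched stdlib module:

import eventlet
eventlet.monkey_patch()

import select
from eventlet import patcher

print(hasattr(select, 'poll'))            # False: the green module omits poll()
real_select = patcher.original('select')  # the original, unpatched module
print(hasattr(real_select, 'poll'))       # True on Linux

Handing the real poll() back to greened code would defeat the patching, so this is for diagnosis rather than a fix.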
Posted about 5 years ago by Maximilian Trauboth
I receive the following output:
Traceback (most recent call last):
File "/home/ec2-user/env/lib64/python3.7/site-packages/redis/connection.py", line 1192, in get_connection
raise ConnectionError('Connection has data')
redis.exceptions.ConnectionError: Connection has data
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ec2-user/env/lib64/python3.7/site-packages/eventlet/hubs/hub.py", line 457, in fire_timers
timer()
File "/home/ec2-user/env/lib64/python3.7/site-packages/eventlet/hubs/timer.py", line 58, in __call__
cb(*args, **kw)
File "/home/ec2-user/env/lib64/python3.7/site-packages/eventlet/greenthread.py", line 214, in main
result = function(*args, **kwargs)
File "crawler.py", line 53, in fetch_listing
url = dequeue_url()
File "/home/ec2-user/WebCrawler/helpers.py", line 109, in dequeue_url
return redis.spop("listing_url_queue")
File "/home/ec2-user/env/lib64/python3.7/site-packages/redis/client.py", line 2255, in spop
return self.execute_command('SPOP', name, *args)
File "/home/ec2-user/env/lib64/python3.7/site-packages/redis/client.py", line 875, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/home/ec2-user/env/lib64/python3.7/site-packages/redis/connection.py", line 1197, in get_connection
raise ConnectionError('Connection not ready')
redis.exceptions.ConnectionError: Connection not ready
I couldn't find any issue related to this particular error. I emptied/flushed all Redis databases, so there should be no data there. I assume it has something to do with eventlet and patching, but even when I put the following code right at the beginning of the file, the error still appears.
import eventlet
eventlet.monkey_patch()
What does this error mean?
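A hedged sketch of one plausible cause, not a confirmed diagnosis: redis-py raises 'Connection has data' when a pooled connection still has an unread reply (typically a greenthread was killed or timed out mid-response), and 'Connection not ready' when, after reconnecting, the socket still reports pending data. Patching first and creating the client only afterwards at least rules out half-greened sockets; the hostname below is a stand-in:

import eventlet
eventlet.monkey_patch()  # must run before redis is imported anywhere

import redis  # imported after patching, so its sockets are green

r = redis.StrictRedis(host='localhost', port=6379, db=0)

def dequeue_url():
    # each greenthread checks a connection out of the (greened) pool
    return r.spop('listing_url_queue')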
Posted about 5 years ago by Maximilian Trauboth
I am trying to implement the Amazon web scraper mentioned here. However, I get the output below, which repeats until it stops with RecursionError: maximum recursion depth exceeded.
I have already tried downgrading eventlet to version 0.17.4 as mentioned here. Also, the requests module is getting patched, as you can see in helpers.py.
helpers.py
import os
import random
from datetime import datetime
from urllib.parse import urlparse

import eventlet
requests = eventlet.import_patched('requests.__init__')
time = eventlet.import_patched('time')
import redis
from bs4 import BeautifulSoup
from requests.exceptions import RequestException

import settings

num_requests = 0

redis = redis.StrictRedis(host=settings.redis_host, port=settings.redis_port, db=settings.redis_db)


def make_request(url, return_soup=True):
    # global request building and response handling
    url = format_url(url)
    if "picassoRedirect" in url:
        return None  # skip the redirect URLs
    global num_requests
    if num_requests >= settings.max_requests:
        raise Exception("Reached the max number of requests: {}".format(settings.max_requests))
    proxies = get_proxy()
    try:
        r = requests.get(url, headers=settings.headers, proxies=proxies)
    except RequestException as e:
        log("WARNING: Request for {} failed, trying again.".format(url))
        return None  # r would be unbound below if the request raised
    num_requests += 1
    if r.status_code != 200:
        os.system('say "Got non-200 Response"')
        log("WARNING: Got a {} status code for URL: {}".format(r.status_code, url))
        return None
    if return_soup:
        return BeautifulSoup(r.text), r.text
    return r


def format_url(url):
    # make sure URLs aren't relative, and strip unnecessary query args
    u = urlparse(url)
    scheme = u.scheme or "https"
    host = u.netloc or "www.amazon.de"
    path = u.path
    if not u.query:
        query = ""
    else:
        query = "?"
        for piece in u.query.split("&"):
            k, v = piece.split("=")
            if k in settings.allowed_params:
                query += "{k}={v}&".format(**locals())
        query = query[:-1]
    return "{scheme}://{host}{path}{query}".format(**locals())


def log(msg):
    # global logging function
    if settings.log_stdout:
        try:
            print("{}: {}".format(datetime.now(), msg))
        except UnicodeEncodeError:
            pass  # squash logging errors in case of non-ascii text


def get_proxy():
    # choose a proxy server to use for this request, if we need one
    if not settings.proxies or len(settings.proxies) == 0:
        return None
    proxy = random.choice(settings.proxies)
    proxy_url = "socks5://{user}:{passwd}@{ip}:{port}/".format(
        user=settings.proxy_user,
        passwd=settings.proxy_pass,
        ip=proxy,
        port=settings.proxy_port,
    )
    return {
        "http": proxy_url,
        "https": proxy_url,
    }


if __name__ == '__main__':
    # test proxy server IP masking
    r = make_request('https://api.ipify.org?format=json', return_soup=False)
    print(r.text)
output
Traceback (most recent call last):
File "helpers.py", line 112, in
r = make_request('https://api.ipify.org?format=json', return_soup=False)
File "helpers.py", line 36, in make_request
r = requests.get(url, headers=settings.headers, proxies=proxies)
File "/home/ec2-user/env/lib64/python3.7/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/home/ec2-user/env/lib64/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/ec2-user/env/lib64/python3.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/home/ec2-user/env/lib64/python3.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/home/ec2-user/env/lib64/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/ec2-user/env/lib64/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/ec2-user/env/lib64/python3.7/site-packages/urllib3/connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "/home/ec2-user/env/lib64/python3.7/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn
conn.connect()
File "/home/ec2-user/env/lib64/python3.7/site-packages/urllib3/connection.py", line 300, in connect
conn = self._new_conn()
File "/home/ec2-user/env/lib64/python3.7/site-packages/urllib3/contrib/socks.py", line 99, in _new_conn
**extra_kw
File "/home/ec2-user/env/lib64/python3.7/site-packages/socks.py", line 199, in create_connection
sock.connect((remote_host, remote_port))
File "/home/ec2-user/env/lib64/python3.7/site-packages/socks.py", line 47, in wrapper
return function(*args, **kwargs)
File "/home/ec2-user/env/lib64/python3.7/site-packages/socks.py", line 774, in connect
super(socksocket, self).settimeout(self._timeout)
File "/home/ec2-user/env/lib64/python3.7/site-packages/eventlet/greenio/base.py", line 395, in settimeout
self.setblocking(True)
What might be the problem here?
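A hedged reading of the bottom of the trace, offered as an assumption: socks.py was imported unpatched while requests was import_patched, so PySocks' socksocket ends up layered over eventlet's green socket, and settimeout() and setblocking() bounce between the two classes until the recursion limit is hit. The usual way to avoid a mixed patched/unpatched module graph is to patch everything once, first, instead of patching individual modules:

# top of helpers.py (sketch)
import eventlet
eventlet.monkey_patch()  # green sockets for requests, urllib3 and PySocks alike

import time
import requests  # a plain import is fine once everything is patched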
Posted about 5 years ago by Nick
I am trying to use gunicorn and eventlet with my Python Flask-SocketIO web app. The entire thing works except for seleniumwire. When I hit a route that uses seleniumwire, I get the following error:
Exception in thread Selenium Wire Proxy Server:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.6/socketserver.py", line 232, in serve_forever
with _ServerSelector() as selector:
File "/usr/lib/python3.6/selectors.py", line 348, in __init__
self._poll = select.poll()
AttributeError: module 'select' has no attribute 'poll'
I thought it might have something to do with seleniumwire not being caught by the eventlet.monkey_patch() call at the top of my file, so I wanted to try importing it with eventlet.import_patched(). I can't get this to work, though. I can write seleniumwire = eventlet.import_patched('seleniumwire'), but seleniumwire.webdriver naturally doesn't work because webdriver is a package, not a method, and I can't figure out the import_patched version of "from seleniumwire import webdriver".
The app works fine when I run it without eventlet or gunicorn.
What is causing this error? If the problem is seleniumwire not being green, how do I properly import it?
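On the import question specifically, a minimal sketch: import_patched() takes a dotted module path, so the equivalent of "from seleniumwire import webdriver" is to patch the submodule directly. Whether this also resolves the select.poll error is a separate question, since monkey_patch() removes poll() and seleniumwire's proxy thread asks for a poll-based selector:

import eventlet
eventlet.monkey_patch()

# equivalent of "from seleniumwire import webdriver", but greened
webdriver = eventlet.import_patched('seleniumwire.webdriver')

driver = webdriver.Chrome()  # used exactly like the plain import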
Posted about 5 years ago by Nick
I have a working Flask-SocketIO server but it is not working with Gunicorn. The relevant part of the server looks like this:
def main(env, resp):
    app = Flask(__name__,
                static_url_path='',
                static_folder='dist',
                template_folder='dist')
    socketio = SocketIO(app)

    @app.route('/')
    def home():
        return app.send_static_file('index.html')

    # socketio.run(app, host='0.0.0.0', port=8000)
    # I comment this out when using Gunicorn because otherwise it tries
    # to run the process twice and throws an error.
I am using eventlet and running the following command, as described in the Flask-SocketIO docs here:
gunicorn --worker-class eventlet -b 0.0.0.0:8000 -w 1 index:main
The gunicorn process starts fine, but when I navigate to the page I get the following server error:
Error handling request /
Traceback (most recent call last):
File "/home/myusername/.local/lib/python3.6/site-packages/gunicorn/workers/base_async.py", line 55, in handle
self.handle_request(listener_name, req, client, addr)
File "/home/myusername/.local/lib/python3.6/site-packages/gunicorn/workers/base_async.py", line 113, in handle_request
for item in respiter:
TypeError: 'NoneType' object is not iterable
I cannot find any information on this error and would appreciate any ideas.
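A hedged sketch of what the trace suggests: gunicorn treats index:main as the WSGI callable, calls main(env, resp), receives None, and then fails iterating it, which is exactly 'NoneType' object is not iterable. The layout in the Flask-SocketIO docs creates the objects at module level and points gunicorn at the app object instead:

# index.py (sketch)
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__,
            static_url_path='',
            static_folder='dist',
            template_folder='dist')
socketio = SocketIO(app)

@app.route('/')
def home():
    return app.send_static_file('index.html')

# then run: gunicorn --worker-class eventlet -b 0.0.0.0:8000 -w 1 index:app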
Posted about 5 years ago by Alexander Mueller
I have a web application which is locally run with docker-compose. It contains Django connected to PostgreSQL, Celery and RabbitMQ.
Every time I run a shared_task I get the following DatabaseError.
worker_1 | [2020-03-17 20:03:40,931: ERROR/MainProcess] Signal handler raised: DatabaseError("DatabaseWrapper objects created in a thread can only be used in that same thread. The object with alias 'default' was created in thread id 140346062341232 and this is thread id 140345963084624.")
worker_1 | Traceback (most recent call last):
worker_1 | File "/usr/local/lib/python3.7/site-packages/celery/utils/dispatch/signal.py", line 288, in send
worker_1 | response = receiver(signal=self, sender=sender, **named)
worker_1 | File "/usr/local/lib/python3.7/site-packages/celery/fixups/django.py", line 166, in on_task_prerun
worker_1 | self.close_database()
worker_1 | File "/usr/local/lib/python3.7/site-packages/celery/fixups/django.py", line 177, in close_database
worker_1 | return self._close_database()
worker_1 | File "/usr/local/lib/python3.7/site-packages/celery/fixups/django.py", line 186, in _close_database
worker_1 | conn.close()
worker_1 | File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 24, in inner
worker_1 | return func(*args, **kwargs)
worker_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 286, in close
worker_1 | self.validate_thread_sharing()
worker_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 558, in validate_thread_sharing
worker_1 | % (self.alias, self._thread_ident, _thread.get_ident())
worker_1 | django.db.utils.DatabaseError: DatabaseWrapper objects created in a thread can only be used in that same thread. The object with alias 'default' was created in thread id 140346062341232 and this is thread id 140345963084624.
The requirements.txt file looks as follows.
-i https://pypi.org/simple
asgiref==3.2.3
argh==0.26.2
celery==4.4.0
django-cors-headers==3.2.0
django==3.0
djangorestframework-jwt==1.11.0
djangorestframework==3.10.3
django-filter==2.2.0
django-storages==1.9.1
eventlet==0.25.1
gunicorn==19.9.0
pyjwt==1.7.1
pytz==2019.3
pyyaml==5.3
sqlparse==0.3.0
stripe==2.42.0
watchdog==0.10.2
docker-compose file
version: '3.7'

services:
  django:
    build: ./
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
      - broker
    volumes:
      - ./:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
  worker:
    build:
      context: .
      dockerfile: Dockerfile.celery
    command: watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery worker --app=wranglab_backend.celery --pool=eventlet --concurrency=500 -l info
    volumes:
      - ./:/usr/src/app
    depends_on:
      - broker
    env_file:
      - ./.env.dev
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - postgres_data_backups:/backups
    env_file:
      - ./.env.dev.db
  broker:
    image: rabbitmq:latest
    env_file:
      - ./.env.dev
    ports:
      - 5672:5672

volumes:
  postgres_data: {}
  postgres_data_backups: {}
Dockerfile
FROM python:3.7-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add libpq
RUN apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip
RUN apk --no-cache add musl-dev linux-headers g++
RUN pip install numpy && \
    pip install pandas && \
    pip install psycopg2
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
COPY . /usr/src/app/
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Dockerfile.celery
FROM python:3.7-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add libpq
RUN apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip
RUN apk --no-cache add musl-dev linux-headers g++
RUN pip install numpy && \
    pip install pandas && \
    pip install psycopg2
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /usr/src/app/
Eventlet is imported within the project's __init__.py, in manage.py, and in wsgi.py as follows.
import eventlet
eventlet.monkey_patch()
celery.py
from __future__ import absolute_import
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'wranglab_backend.settings')
app = Celery("wranglab_backend")
app.config_from_object('django.conf:settings', namespace="CELERY")
app.autodiscover_tasks()
I have tried to execute a task like the following within the app's tasks.py as well as in the project's tasks.py, but I get the error every time. I hope someone can help me out.
@shared_task()
def test_print_task(say):
    time.sleep(6)
    print(say)
    time.sleep(3)
    return say
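A hedged sketch of one thing worth ruling out: with --pool=eventlet, Django's thread-sharing check only passes if the thread module is monkey-patched before Django opens its first connection, so that thread idents are consistently greenlet idents. Patching at the very top of celery.py (the module the worker imports) makes the ordering explicit rather than relying on the project __init__.py being hit first:

# celery.py (sketch)
from __future__ import absolute_import

import eventlet
eventlet.monkey_patch()  # before any Django/Celery import in this process

import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'wranglab_backend.settings')

app = Celery("wranglab_backend")
app.config_from_object('django.conf:settings', namespace="CELERY")
app.autodiscover_tasks()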
Posted about 5 years ago by DJSpar
My simple Flask-SocketIO app works, but only a single request at a time executes the method write_message.
I am using eventlet (https://eventlet.net/), which per its documentation can handle concurrent requests.
socketio = SocketIO(app)

@socketio.on('write-message', namespace='/message')
def write_message(data):
    # long task
    pass

if __name__ == '__main__':
    print("Starting socket app on port 5000")
    socketio.run(app, host='0.0.0.0', port=5000)
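A hedged sketch of the likely mechanics: eventlet is cooperative, so a handler that never performs green I/O (and never sleeps) holds the hub until it finishes, and every other request waits. Monkey-patching before the Flask imports and yielding inside long loops lets requests interleave; the loop below is a stand-in for the real task:

import eventlet
eventlet.monkey_patch()  # must come before the Flask/SocketIO imports

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode='eventlet')

@socketio.on('write-message', namespace='/message')
def write_message(data):
    for _ in range(10):    # stand-in for chunks of the long task
        eventlet.sleep(1)  # green sleep: other handlers run meanwhile

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000)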
Posted over 5 years ago by Crawley
I am using Heroku to host the Flask-SocketIO server with an eventlet worker. I have set up a simple Flask-SocketIO server in order to ensure that it works; however, I have had issues with getting it to work consistently. To test the server, I have been emitting ping from the client to the server, and the server should be emitting pong.
Server-side code
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/')
def index():
    return "Index page"

@socketio.on('ping')
def message():
    emit('pong')

if __name__ == '__main__':
    socketio.run(app)
Client-side code
import socketio

host = "http://{HOSTNAME}:80"

sio = socketio.Client()

@sio.event
def connect():
    print("Connected")
    sio.emit('ping')

@sio.event
def disconnect():
    print("Disconnected")

@sio.event
def pong(data):
    print("ponged")

sio.connect(host)
sio.wait()
Upon testing, the most common result when running the client is
Connected
>>>
I also receive
Connected
Disconnected
Connected
>>>
and, eventually, the expected output
Connected
ponged
Disconnected
Connected
>>>
The fact that the client does not say Disconnected before the program ends suggests that I am not doing something right, not to mention the countless attempts needed to get a ponged message. If any additional details are needed I shall post them accordingly.
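A hedged debugging sketch rather than a fix: python-socketio's emit() accepts a callback that fires only when the server acknowledges the event, which separates "ping never arrived" from "pong never came back", and the verbose loggers show the Engine.IO handshake. Renaming the custom events is also a cheap way to rule out any collision with Engine.IO's transport-level ping/pong frames; that collision is an assumption, not a confirmed cause, and the rename requires the server to handle app_ping and emit app_pong to match:

import socketio

host = "http://{HOSTNAME}:80"  # placeholder as in the question

sio = socketio.Client(logger=True, engineio_logger=True)  # verbose tracing

@sio.event
def connect():
    print("Connected")
    # the callback runs only if the server acked the event
    sio.emit('app_ping', callback=lambda *args: print("server acked ping"))

@sio.on('app_pong')
def app_pong(*args):
    print("ponged")

@sio.event
def disconnect():
    print("Disconnected")

sio.connect(host)
sio.wait()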
Posted over 5 years ago by BigBadBenny
We have a VM handling requests with NGINX and passing them to port 8000. We initialize our Flask app with socketio.run(app, '0.0.0.0', port=8000), and we initialize the socketio object as follows:
import eventlet
eventlet.monkey_patch()

socketio = SocketIO(app, async_mode="eventlet")
Inside our app we use flask-sqlalchemy to talk to a MySQL database.
Everything works more or less as expected for a little while, but after our request is handled, our CPU usage shoots up to 100%, with python3 taking up 50% and mysqld taking up the other half.
If I remove the eventlet pieces, the issue with mysqld persists, but instead of the socketio messages coming in one at a time they all come in at once. When I do this and run SHOW PROCESSLIST in MySQL, I see at most one query, usually in the "creating sort index" or "sending to client" state; this seems to be what is eating up my CPU.
When eventlet is enabled, however, I see multiple of these processes in MySQL. I am going to work on determining why the "creating sort index" process appears so frequently when not running eventlet, but I have no idea what is happening when I AM running eventlet.
Has anyone experienced any similar problems?
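One hedged avenue, assuming the MySQL driver in play is the C-based mysqlclient: eventlet cannot green C extensions, so every blocking driver call stalls the hub, which would plausibly change behavior between the eventlet and non-eventlet runs. PyMySQL is pure Python, is greened by monkey_patch(), and can be aliased so existing mysql:// SQLAlchemy URLs keep working:

import eventlet
eventlet.monkey_patch()

import pymysql
pymysql.install_as_MySQLdb()  # mysql:// URLs now resolve to PyMySQL

# flask-sqlalchemy is then configured exactly as before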
Posted over 5 years ago by JasonGenX
Testing BugSnag with our Django-based REST API server. I have an API endpoint that crashes on purpose just to test it, somewhere in a certain serializer my views use.
In my own settings.py I have:
BUGSNAG = {
    'api_key': '[redacted]',
    'app_version': "1.0",
    'project_root': "/path/to/my/project/folder/where/manage.py/is",
    'release_stage': "development",
    'notify_release_stages': ['development', 'staging', 'production']
}

MIDDLEWARE = (
    'bugsnag.django.middleware.BugsnagMiddleware',
)
When I run my server like this: gunicorn myproj.wsgi -b 0.0.0.0:8000 --reload or python manage.py runserver, BugSnag reports all crashes correctly.
However, when I use gunicorn myproj.wsgi -b 0.0.0.0:8000 --reload --worker-class eventlet, BugSnag stops sending bug reports when exceptions occur. The only clue I have for this behavior is this:
2020-02-08 02:34:37,363 - [bugsnag] ERROR - Notifying Bugsnag failed wrap_socket() got an unexpected keyword argument '_context'
Why does BugSnag stop working when gunicorn is used with the eventlet worker class? I am completely at a loss here. There are zero references to the subject online, as if this problem only occurs on my computer... not very encouraging.
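A hedged sketch of one plausible culprit: wrap_socket() got an unexpected keyword argument '_context' is characteristic of eventlet's green ssl module lagging behind the stdlib ssl API, which surfaces exactly when the eventlet worker monkey-patches and Bugsnag's HTTPS delivery then hits the green wrap_socket. Two experiments, stated as assumptions rather than a confirmed fix: upgrade eventlet (the signature mismatch was addressed in later releases), and make the patch order explicit so nothing captures real ssl objects before patching:

# myproj/wsgi.py (sketch)
import eventlet
eventlet.monkey_patch()  # explicit and first, before Django/Bugsnag load

import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproj.settings')
application = get_wsgi_application()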