Posted almost 5 years ago by Sam Jacskon
I am running Flask through Gunicorn and I have a big problem: I get [CRITICAL] WORKER_TIMEOUT errors. To fix this I added "preload_app = True" to my gunicorn config, and it almost works. The problem I am having is that gunicorn will not start unless I send the SIGINT signal. This only happens with preload_app.
Here is the output:
(.venv) ubuntu@server:/opt/mayapp$ gunicorn -c etc/gunicorn.conf.py myapp:app
^C[2020-06-18 18:02:46 +0000] [23412] [INFO] Starting gunicorn 19.10.0
[2020-06-18 18:02:46 +0000] [23412] [INFO] Listening at: http://0.0.0.0:8000 (23412)
[2020-06-18 18:02:46 +0000] [23412] [INFO] Using worker: eventlet
[2020-06-18 18:02:46 +0000] [23416] [INFO] Booting worker with pid: 23416
[2020-06-18 18:02:46 +0000] [23417] [INFO] Booting worker with pid: 23417
[2020-06-18 18:02:46 +0000] [23418] [INFO] Booting worker with pid: 23418
[2020-06-18 18:02:46 +0000] [23419] [INFO] Booting worker with pid: 23419
[2020-06-18 18:02:46 +0000] [23420] [INFO] Booting worker with pid: 23420
Notice the ^C before anything happens. Unless I interrupt it, the wait is indefinite. In addition to not starting, when I do want to terminate, I must "pkill -9 "; it will hang for hours with any other method.
My gunicorn.conf.py:
# file gunicorn.conf.py
# coding=utf-8
# Reference: https://github.com/benoitc/gunicorn/blob/master/examples/example_config.py
import os
import multiprocessing
_ROOT = os.path.abspath(os.path.join(
    os.path.dirname(__file__), '..'))
_VAR = os.path.join(_ROOT, 'var')
_ETC = os.path.join(_ROOT, 'etc')
loglevel = 'info'
# errorlog = os.path.join(_VAR, 'log/api-error.log')
# accesslog = os.path.join(_VAR, 'log/api-access.log')
errorlog = "-"
accesslog = "-"
# bind = 'unix:%s' % os.path.join(_VAR, 'run/gunicorn.sock')
bind = '0.0.0.0:8000'
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'eventlet'
preload_app = True
timeout = 3 * 60 # 3 minutes
keepalive = 24 * 60 * 60 # 1 day
capture_output = True
debug = True
My Flask App:
import eventlet
eventlet.monkey_patch()
from flask import Flask, render_template, make_response
from flask_session import Session
from flask_cors import CORS
import os
app = Flask(__name__)
GEVENT_SUPPORT = True
cors = CORS(app)
app.secret_key = os.urandom(24)
app.config['SESSION_TYPE'] = 'filesystem'
app.config['SESSION_KEY_PREFIX'] = "myapp"
app.config['MAX_CONTENT_LENGTH'] = 5 * 1024 * 1024
app.config['GOOGLE_LOGIN_REDIRECT_SCHEME'] = "https"
if os.getenv('MYAPP', 'PROD') == 'DEV':
    app.debug = True
app._static_folder = os.path.abspath("./app/templates/prod/static/")
if os.getenv('VTABLE', 'PROD') == 'DEV':
    app._static_folder = os.path.abspath("./app/templates/dev/static/")
Session(app)
from app import routes
import app.sock as sock
if __name__ == 'app':
    if os.getenv('VTABLE', 'PROD') == 'DEV':
        sock.socketio.run(app, host='0.0.0.0', port=5000, keyfile='key.pem', certfile='cert.pem')
    else:
        sock.socketio.run(app, host='0.0.0.0', port=5000)
The beginning of app/sock.py:
from flask_session import Session
from flask_socketio import SocketIO
from flask_socketio import send, emit, join_room, leave_room
from cachelib.file import FileSystemCache
from engineio.payload import Payload
from engineio.async_drivers import eventlet
from werkzeug.middleware.proxy_fix import ProxyFix
import json
import random
import time
import uuid
from uuid import UUID
from app import app
from app import routes
Session(app)
socketio = SocketIO(app, async_mode='eventlet')
session = routes.session
@socketio.on('player_join')
def on_join(player_key):
    join_room(player_key)
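For reference, one direction that comes up for this combination (hedged; not verified against this exact setup): with preload_app = True, the module-level eventlet.monkey_patch() runs inside the gunicorn master process while it imports the app, which may be what interferes with the arbiter's signal/sleep loop. Gunicorn's eventlet worker class applies its own monkey patch while each worker boots, so a sketch that keeps the import-time patch out of the preloading master could look like this (the GUNICORN_PRELOAD environment flag is purely hypothetical, something you would export yourself from gunicorn.conf.py, not a gunicorn built-in):
# Top of the Flask app module (sketch, untested): skip the import-time
# monkey patch when imported by a preloading gunicorn master; gunicorn's
# eventlet worker patches again inside each forked worker.
import os
if os.environ.get('GUNICORN_PRELOAD') != '1':  # hypothetical flag
    import eventlet
    eventlet.monkey_patch()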
Posted almost 5 years ago by daniel william
I am trying to use the python-socketio library with Django. I am also using eventlet, but the problem is that none of my static files work; every request for a static file returns Not Found.
import os
from django.core.wsgi import get_wsgi_application
import socketio
from apps.chatbox.views import sio
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "chatbot.settings")
django_app = get_wsgi_application()
application = socketio.Middleware(sio, wsgi_app=django_app, socketio_path='socket.io')
import eventlet
import eventlet.wsgi
eventlet.wsgi.server(eventlet.listen(('', 8000)), application)
Error
Not Found: /static/js/main.a9e46e37.chunk.js
127.0.0.1 - - [08/Jun/2020 20:28:47] "GET /static/js/main.a9e46e37.chunk.js HTTP/1.1" 404 1976 0.017001
Not Found: /static/js/4.f274b99f.chunk.js
127.0.0.1 - - [08/Jun/2020 20:28:47] "GET /static/js/4.f274b99f.chunk.js HTTP/1.1" 404 1967 0.021015
Not Found: /static/js/main.a9e46e37.chunk.js
127.0.0.1 - - [08/Jun/2020 20:28:47] "GET /static/js/main.a9e46e37.chunk.js HTTP/1.1" 404 1976 0.010007
Without the code below in the wsgi file, everything works fine:
import eventlet
import eventlet.wsgi
eventlet.wsgi.server(eventlet.listen(('', 8000)), application)
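The raw Django WSGI app does not serve static files by itself (runserver adds that only in DEBUG), so wrapping it directly in eventlet.wsgi.server leaves the /static/ URLs returning 404. A minimal sketch of one common approach, assuming WhiteNoise is installed and the static files have been collected into a staticfiles/ directory (both are assumptions, not taken from the post):
import os
import eventlet
import eventlet.wsgi
import socketio
from django.core.wsgi import get_wsgi_application
from whitenoise import WhiteNoise  # assumption: whitenoise is installed
from apps.chatbox.views import sio
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "chatbot.settings")
# Serve everything collected into "staticfiles/" (assumed STATIC_ROOT) under /static/.
django_app = WhiteNoise(get_wsgi_application(), root="staticfiles", prefix="static/")
application = socketio.Middleware(sio, wsgi_app=django_app, socketio_path='socket.io')
eventlet.wsgi.server(eventlet.listen(('', 8000)), application)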
Posted almost 5 years ago by Kay
How should I translate app.run() to socketio.run() with SSL?
I have the following app start code to run with the Flask development server:
if __name__ == '__main__':
    app.run(ssl_context=(ssl_cert, ssl_key))
I am now trying to start it with socketio like below:
if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=80, debug=True)
However, I cannot figure out how to pass the cert into this call.
What do I have to do to make this work?
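For reference, a minimal sketch assuming Flask-SocketIO running on the eventlet server, where socketio.run() forwards certfile/keyfile to the underlying server (the port is illustrative; ssl_cert and ssl_key are the same paths used with app.run() above):
if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=443, debug=True,
                 certfile=ssl_cert, keyfile=ssl_key)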
Posted almost 5 years ago by Neil
My main celery app is running in AWS in an EC2 instance ("main"), and the tasks it generates interact with an RDS database in the same AZ ("db"). The workload generates up to thousands of tasks every minute and I need to execute them in parallel as quickly as possible. I have workers consuming tasks in two physical locations: one on a separate EC2 instance ("worker EC2"), in the same AZ as main and db, and one on a physical machine in our office's private data center ("worker local").
Both worker EC2 and worker local were running the prefork pool with --autoscale=70,4 and were working fine (tasks completing in 2-3 s), but CPU and memory usage was high and I need even more parallelism if possible. So I've been experimenting with eventlet and gevent with concurrency=100. I am stuck on the following issues:
Eventlet on worker EC2 task completion time (1s) is faster than prefork (2-3s). However, on worker local, task completion time is much slower (17-25s), and most of the time it is stuck performing simple queries to the db and I/O with a local Redis cache. CPU and memory are not heavily taxed by the process. The same issues occur on worker local with Gevent, but more severe: tasks complete in 200+ seconds.
Eventlet on worker EC2 task completion time is faster, but it doesn't seem to be using the full concurrency. Though each task completes within 1-1.5 seconds, I never see more than 10-15 tasks completed within any given one-second window, so it seems the full concurrency is not being utilized. CPU is 60-80% utilized and memory less than 50%, so neither is heavily taxed.
The local worker running both Gevent and Eventlet frequently goes idle even with tasks in the queue. Once I restart the worker it starts consuming tasks again for some time, then goes idle again. I notice that in these scenarios the worker gets a BrokenPipe error. NOTE: this does NOT happen with prefork, so I don't think it's the connection/network but rather something to do with the pooling package. This behavior is worse for Gevent than Eventlet, but both are bad.
Questions:
Why would Eventlet and Gevent perform differently between the local and EC2 workers? The only difference between these two workers is that one is physically closer to the main celery app and the DB; would that affect how green threads execute?
How do I get Eventlet to actually utilize the full concurrency setting?
Why would Gevent take so long to access the in-memory cache?
How can I get Eventlet/Gevent on the remote worker to consistently consume tasks?
Note that NONE of these issues occur with the exact same setup and the same workers using prefork. The only issue with prefork is that the worker machines' CPU and memory are heavily taxed even at 70 concurrency, and I would like it to be closer to 700.
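For context, a sketch of the kind of settings involved on the worker side (a celeryconfig.py-style file; the values are illustrative assumptions, not taken from the post). The broker prefetch count is worker_prefetch_multiplier * concurrency, which bounds how many unacknowledged tasks a single eventlet/gevent worker can hold at once, so it is one of the knobs worth checking when concurrency looks under-utilized:
# celeryconfig.py (sketch)
worker_prefetch_multiplier = 10  # default is 4; prefetch count = multiplier * concurrency
broker_heartbeat = 60            # heartbeats can surface half-dead broker connections earlier
task_acks_late = False           # default; tasks are acked as they start, freeing prefetch slots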
Posted almost 5 years ago by aescript
I'm adding a websocket server to my app so it can communicate with a web-based version of itself.
For this I am using eventlet.
The problem I am running into is that once I start the server, I cannot get it to die. Even if I close my application, the process remains running in the background while the server stays alive. I have been googling and testing random things for days now and just cannot make it happen. I'm hoping someone here can help me.
Currently, I have a singleton class with a function that starts the listening:
def run_server(self, addr, port):
    eventlet.wsgi.server(eventlet.listen((addr, port)), self.APP)
and for info, the app is:
socketio.WSGIApp(SIO, static_files={'/': {'content_type': 'text/html',
                                          'filename': 'index.html'}})
and then I am starting this server on a thread in the app:
def run_on_thread(addr, port):
    obj = WebSocket()
    t = threading.Thread(target=obj.run_server, args=(addr, port))
    t.setDaemon(True)
    return obj, t
The thread gets started in my application, and as mentioned, everything works fine. All my messages are sent and received.
But nothing I can find will kill this server until I End Task on the python process.
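For reference, a minimal sketch of one way to get a killable server, assuming the app can run it on an eventlet green thread instead of a threading.Thread (the class and attribute names here are illustrative, not from the original code):
import eventlet
import eventlet.wsgi
class WebSocketServer:
    def __init__(self, app):
        self.app = app
        self.sock = None
        self.gt = None
    def start(self, addr, port):
        # Keep references to the listening socket and the green thread
        # so both can be torn down later.
        self.sock = eventlet.listen((addr, port))
        self.gt = eventlet.spawn(eventlet.wsgi.server, self.sock, self.app)
    def stop(self):
        # kill() raises GreenletExit inside wsgi.server; closing the
        # socket stops it from accepting new connections.
        if self.gt is not None:
            self.gt.kill()
        if self.sock is not None:
            self.sock.close()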
Posted almost 5 years ago by Rishi Prasad
I am running a Flask app on Google App Engine using Gunicorn's async workers.
Every time requests come in, after the last request has finished responding, I get the following messages and my gunicorn workers exit. Then there's a slight delay when the next batch of requests comes in.
2020-05-17 16:57:14 default[20200517t125405] [2020-05-17 16:57:14 +0000] [7] [INFO] Handling signal: term
2020-05-17 16:57:14 default[20200517t125405] [2020-05-17 16:57:14 +0000] [7] [INFO] Handling signal: term
2020-05-17 16:57:14 default[20200517t125405] [2020-05-17 16:57:14 +0000] [21] [INFO] Worker exiting (pid: 21)
2020-05-17 16:57:14 default[20200517t125405] [2020-05-17 16:57:14 +0000] [20] [INFO] Worker exiting (pid: 20)
2020-05-17 16:57:14 default[20200517t125405] [2020-05-17 16:57:14 +0000] [18] [INFO] Worker exiting (pid: 18)
2020-05-17 16:57:14 default[20200517t125405] [2020-05-17 16:57:14 +0000] [14] [INFO] Worker exiting (pid: 14)
2020-05-17 16:57:14 default[20200517t125405] [2020-05-17 16:57:14 +0000] [19] [INFO] Worker exiting (pid: 19)
Here is my app.yaml
runtime: python37
entrypoint: gunicorn --worker-class eventlet -c gunicorn.conf.py -b :$PORT main:app preload_app=True
instance_class: F2
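As an aside (an observation, not a confirmed cause of the worker exits): the trailing preload_app=True in the entrypoint is not the usual way to pass a gunicorn setting on the command line, so it may not be taking effect. Preloading is normally enabled with the --preload CLI flag or from the config file, e.g.:
# in gunicorn.conf.py -- config-file equivalent of the --preload flag
preload_app = True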
Here is my gunicorn.conf.py file
import multiprocessing
workers = (multiprocessing.cpu_count()) * 2 + 1
threads = workers # originally didn't have this, just had the workers var defined, but tried this and it also didn't solve the problem
I tried searching SO and some other sources but can't find a workaround for this.
Posted almost 5 years ago by Máté Eckl
Long story short: My requests sometimes get interrupted by a timer and I have no idea exactly why. Could anyone explain? Thanks in advance.
Long version:
I am implementing an adaptive QoS application on Ryu in which I reset switchport queue values periodically. Probably due to the specific OVS implementation, these requests take a really long time, during which the response processing sometimes gets interrupted by a timer.
I am making HTTP calls against the rest_qos app which ships with Ryu; something sets a timer and I still cannot find out what triggers it. Sometimes the timer doesn't throw an exception during an API call lasting 60 seconds; sometimes it does after just a few seconds.
The specific stack trace is available here.
The part of my app that initiates this call can be found under this link at the set_queues function.
Posted almost 5 years ago by Caio Filus
I am implementing an email sender in my Flask project, which uses Flask-SocketIO asynchronously with eventlet. My basic code is like this:
eventlet.monkey_patch(all=True)
app = Flask(__name__, template_folder="Templates")
socket: SocketIO = SocketIO(app, async_mode="eventlet", json=json)
@app.route('/')
def main():
    msg = Message(subject="Hello",
                  sender=('Muninn', '[email protected]'),
                  recipients=["[email protected]"],
                  body="Muninn system online")
    print(mail.send(msg))
    return 'Muninn Online'
if __name__ == '__main__':
    socket.run(app, host='0.0.0.0', port=80, debug=True)
When I access localhost/ eventlet gives me this error:
wrap_socket() got an unexpected keyword argument '_context'
Traceback (most recent call last):
File "D:\Projetos\Hello\MuninnServer\venv\lib\site-packages\flask\app.py", line 1832, in full_dispatch_request
rv = self.dispatch_request()
File "D:\Projetos\Hello\MuninnServer\venv\lib\site-packages\flask\app.py", line 1818, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "D:\Projetos\Hello\MuninnServer\main.py", line 70, in main
print(mail.send(msg))
File "D:\Projetos\Hello\MuninnServer\venv\lib\site-packages\flask_mail.py", line 491, in send
with self.connect() as connection:
File "D:\Projetos\Hello\MuninnServer\venv\lib\site-packages\flask_mail.py", line 144, in __enter__
self.host = self.configure_host()
File "D:\Projetos\Hello\MuninnServer\venv\lib\site-packages\flask_mail.py", line 156, in configure_host
host = smtplib.SMTP_SSL(self.mail.server, self.mail.port)
File "C:\Users\caiof\AppData\Local\Programs\Python\Python38-32\lib\smtplib.py", line 1034, in __init__
SMTP.__init__(self, host, port, local_hostname, timeout,
File "C:\Users\caiof\AppData\Local\Programs\Python\Python38-32\lib\smtplib.py", line 253, in __init__
(code, msg) = self.connect(host, port)
File "C:\Users\caiof\AppData\Local\Programs\Python\Python38-32\lib\smtplib.py", line 339, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "C:\Users\caiof\AppData\Local\Programs\Python\Python38-32\lib\smtplib.py", line 1042, in _get_socket
new_socket = self.context.wrap_socket(new_socket,
File "D:\Projetos\Hello\MuninnServer\venv\lib\site-packages\eventlet\green\ssl.py", line 438, in wrap_socket
return GreenSSLSocket(sock, *a, _context=self, **kw)
File "D:\Projetos\Hello\MuninnServer\venv\lib\site-packages\eventlet\green\ssl.py", line 67, in __new__
ret = _original_wrap_socket(
TypeError: wrap_socket() got an unexpected keyword argument '_context'
wrap_socket() got an unexpected keyword argument '_context'
When I remove eventlet.monkey_patch(all=True), I can send the email normally.
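For reference, this exact TypeError is usually reported when the installed eventlet predates Python 3.8's ssl changes, so upgrading eventlet is the first thing worth trying. Another configuration that is sometimes suggested (hedged: the socket still gets wrapped, just later via starttls()) is switching Flask-Mail from implicit SSL to STARTTLS, assuming the mail server also accepts it on port 587 (an assumption about the provider):
from flask import Flask
from flask_mail import Mail
app = Flask(__name__)
# STARTTLS path: Flask-Mail then uses smtplib.SMTP + starttls() instead of
# smtplib.SMTP_SSL, which is the code path shown in the traceback above.
app.config['MAIL_SERVER'] = 'smtp.example.com'  # hypothetical server
app.config['MAIL_PORT'] = 587
app.config['MAIL_USE_TLS'] = True
app.config['MAIL_USE_SSL'] = False
mail = Mail(app)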
Posted about 5 years ago by DJ E.T
I have a Flask-SocketIO app that is supposed to implement a chat groups system.
The client that's interacting with it is a Flutter app.
I wrote a test to see if the socketio events are working. It worked once, but then stopped.
The server is getting the client's emits but not emitting back to the client.
Also, the connection-related events (connect, disconnect, error) seem to be fully working; the client's callbacks on these events are called.
My Flutter test client:
void main() async {
  setupLocator();
  final api = locator<Api>();
  final socketService = locator<SocketService>();
  Message msg = Message(
      msgId: null,
      content: "Hello!",
      senderId: 1,
      senderName: 'tair',
      sendtime: DateTime.now().toString(),
      groupId: 1);
  runApp(
    MaterialApp(
      home: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            RaisedButton(
                child: Text("Disconnect"),
                onPressed: () {
                  socketService.leave(1, 1);
                }),
            RaisedButton(
              child: Text("Send"),
              onPressed: () {
                socketService.sendMessage(msg);
              },
            ),
            RaisedButton(
              child: Text("Connect"),
              onPressed: () {
                socketService.connect(1, 1, (data) {
                  print('Data!');
                  print(data);
                });
              },
            ),
          ],
        ),
      ),
    ),
  );
  SharedPreferences.setMockInitialValues({});
  await api.login('tair', '1234');
  await socketService.getToken();
  socketService.connect(1, 1, (data) {
    print('Data!');
    print(data);
  });
}
Api: A class that interacts with the REST API; not related.
SocketService: A class that emits and listens to events. I'm giving the connect() method parameters in order to join a socketio room on the server side.
locator: Dependency injection using the pub package get_it; also not related.
Here are the events on my server:
@sock.on('join')
def join_group_room(data):
    print(data)
    token = data['token']
    if token in user_token.keys():
        group = Group.query.filter_by(id=int(data['groupId'])).first()
        if group.temp_participants is None:
            group.temp_participants = data['userId'] + ','
        else:
            group.temp_participants += data['userId'] + ','
        db.session.commit()
        join_room(data['groupId'])
        # print(rooms())
    else:
        emit('error', 'Invalid token')
@sock.on('message')
def message_room(data):
    print(data)
    token = data['token']
    if token in user_token.keys():
        message = Message(content=data['message'], groupid=int(data['groupId']),
                          username=user_token[token], datetime=data['datetime'])
        db.session.add(message)
        db.session.commit()
        participants = Group.query.filter_by(id=message.groupid).first().participants.split(",")
        temp_participants = Group.query.filter_by(id=message.groupid).first().temp_participants.split(",")
        for participant in participants:
            if participant not in temp_participants:
                pushbots.push_batch(platform=pushbots.PLATFORM_ANDROID,
                                    alias=participant,
                                    msg='A new message arrived',
                                    payload={'data': {'message': message.content,
                                                      'messageId': message.id,
                                                      'username': user_token[token],
                                                      'datetime': message.datetime,
                                                      'groupId': message.groupid,
                                                      'userId': User.query.filter_by(
                                                          username=user_token[token]).first().id}})
        print("Emitting")
        emit('message', {'message': message.content, 'messageId': message.id,
                         'username': user_token[token], 'datetime': message.datetime,
                         'groupId': message.groupid,
                         'userId': User.query.filter_by(username=user_token[token]).first().id},
             room=message.groupid)
        sock.sleep(0)
    else:
        emit('error', 'Invalid token')
@sock.on('leave')
def leave_group_room(data):
    print(data)
    token = data['token']
    if token in user_token.keys():
        group = Group.query.filter_by(id=int(data['groupId'])).first()
        group.temp_participants = str(group.temp_participants.split(",").remove(data['userId'])).strip('[]')
        db.session.commit()
        leave_room(data['groupId'])
    emit('error', 'Invalid token')
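One detail worth double-checking in the handlers above (an observation, not a confirmed diagnosis): join_room() is called with data['groupId'] exactly as the client sent it, while the emit uses room=message.groupid, which is an int. If the client sends the group id as a string, the two room names will not match and the emit goes to an empty room. A small sketch of normalizing both sides to one canonical name:
from flask_socketio import emit, join_room
def room_name(group_id):
    # Normalize the room name so join_room() and emit(room=...) always agree,
    # whether the id arrives as an int or a string.
    return 'group-{}'.format(group_id)
# in the 'join' handler:    join_room(room_name(data['groupId']))
# in the 'message' handler: emit('message', payload, room=room_name(message.groupid))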
I'm using eventlet as async_mode for the socketio app. I looked up online for solutions and many people said I should add the following lines to the main script:
import eventlet
eventlet.monkey_patch()
Also, according to my partner on this project, the events are working fine on his machine.
For further explanation, here is the link to my git repo so you can see the whole code: My git repo (it's on the integration/ClientServer branch)
Thanks for helping!
Posted almost 5 years ago by Rupak Banerjee
I have the following code:
import pika
import os
import eventlet
from eventlet.green import threading
pika = eventlet.import_patched('pika')
eventlet.monkey_patch()
# More Code
def pika_client():
    global connection, channel
    params = pika.ConnectionParameters(heartbeat=500,
                                       blocked_connection_timeout=300)
    connection = pika.BlockingConnection(params)
    channel = connection.channel()
    return 1
if __name__ == '__main__':
    eventlet.spawn(pika_client)
    socketio.run(app, host='192.168.1.214')
However, the pika connection gets disconnected after 20-30 mins.
Any help will be highly appreciated.
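For reference, pika's BlockingConnection only exchanges heartbeat frames while its I/O loop is being driven, so a connection that just sits idle after channel() returns will eventually be closed by the broker. A minimal sketch of keeping it serviced from the same green thread (the loop and names are illustrative, not from the original code):
import eventlet
pika = eventlet.import_patched('pika')
def pika_client():
    params = pika.ConnectionParameters(heartbeat=500,
                                       blocked_connection_timeout=300)
    connection = pika.BlockingConnection(params)
    channel = connection.channel()
    while True:
        # process_data_events() pumps the connection's I/O loop, which is
        # what actually sends/receives heartbeats with the broker.
        connection.process_data_events(time_limit=1)
        eventlet.sleep(5)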