We have largely been using UTF8 for our databases for some time. However, if we
want to support emojis or other more exotic characters, we need to start using
utf8mb4. In fact, our Rails projects are set up to require that character set.
So while we are upgrading database versions (from the EOL MySQL 5.6 to MySQL
8), we decided to upgrade the character sets of our databases at the same time.
I initially tried to configure our database with collation_server as
utf8mb4_0900_ai_ci to get all the latest unicode goodness, but AWS said I
couldn’t set collation_server to that value. The closest option was
utf8mb4_unicode_ci so I went with that. Once I got into the database, I found
the following settings:
So from that we see databases should automatically be created with character set
utf8mb4 and collation utf8mb4_0900_ai_ci because that is the default collation
for the character set utf8mb4. However, I suspect I may want to set the
character set (and collation) for the client and connection to utf8mb4.
Changing the values in my terraform aws_db_parameter_group did not change the
values in the running database.
Authentication plugin
I was expecting the password hashing mechanism for the new database to be
caching_sha2_password since I had read that was the new method for MySQL8. But
when I created my first user on the new database, I found they had their
password stored as mysql_native_password.
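You can check which plugin a given user's password is stored with, and create a
user with a specific plugin if you need to (the user name and password below are
placeholders):

SELECT user, host, plugin FROM mysql.user;

CREATE USER 'someuser'@'%' IDENTIFIED WITH mysql_native_password BY 'somepassword';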
Timezone tables
The Django docs about using MySQL say you need to load the timezone tables
when creating a new MySQL instance. Both the docker containers we use in dev and
the AWS RDS MySQL instances take care of loading this data automatically.
Notes from MySQL 8 Docs
https://dev.mysql.com/doc/refman/8.0/en/charset-server.html The server character
set and collation are used as default values if the database character set and
collation are not specified in CREATE DATABASE statements. They have no other
purpose.
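In other words (example_db and example_db2 are just placeholder names):

CREATE DATABASE example_db;  -- picks up character_set_server / collation_server as its defaults

CREATE DATABASE example_db2 CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;  -- explicit override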
The values you see for show variables like '%collation%'; will vary depending
on the session details. The database you are in (e.g. one of the user schemas vs
in mysql or information_schema) will affect what you see for collation_database
and character_set_database. And how you connected will affect the values you see
for collation_connection and character_set_connection, character_set_results,
and character_set_client. It looks to me like the controlling variable there is
character_set_client.
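A quick way to see that session dependence in action (this only affects the
current session): SET NAMES changes character_set_client, character_set_connection,
and character_set_results together, and most client libraries issue something like
it when they connect.

SET NAMES utf8mb4;
SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';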
We could probably use a single database with multiple schemas for all our development work. That’s
how we have been doing it for years. But sometimes that leads to us making compromises - like
running a slightly different version of the database in dev vs production. And it leads to being
somewhat conservative when trying out different database options. Now that we are using Docker for
deploying our applications, it makes sense to use docker-compose to create all the services our apps
use: relational databases, ElasticSearch, caches. The docker-compose file also manages mounted
volumes (for preserving the data in our development databases) AND a private network for each set of
containers.
Version 1
Creating a database and user
If we want to take full advantage of docker-compose’s automated workflow, we need to be able to
recreate our databases - including loading dev data - automatically. The official MySQL database
container image supports a really easy way to do this. If you set a
handful of environment variables in your docker-compose.yml file, the first time you start the
database container, it will create the database instance and set the root password to the value from
MYSQL_ROOT_PASSWORD. If you include values for MYSQL_DATABASE, MYSQL_USER and
MYSQL_PASSWORD, the first startup of the database will create that database and grant all
privileges on that database to the specified user. Excellent! That gets us most of the way there.
Now if we could only load some initial data….
Loading initial data
The MySQL image provides for this too. In the section “Initializing a fresh instance”:
When a container is started for the first time, a new database with the specified name will be
created and initialized with the provided configuration variables. Furthermore, it will execute
files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files
will be executed in alphabetical order. You can easily populate your mysql services by mounting a
SQL dump into that directory and provide custom images with contributed data. SQL files will be
imported by default to the database specified by the MYSQL_DATABASE variable.
So the database section of our standard docker-compose.yml looks like:
mysql:
  image: mysql:8.0.23
  container_name: "db-vote"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root_password
    MYSQL_DATABASE: vote
    MYSQL_USER: vote
    MYSQL_PASSWORD: password
  cap_add:
    - SYS_NICE  # removes error messages like: mbind: Operation not permitted
  volumes:
    - ./sql/docker/mysql-data:/var/lib/mysql
    - ./sql/docker/my.cnf:/etc/mysql/conf.d/dev.cnf
    - ./sql/docker:/docker-entrypoint-initdb.d
And our project’s sql/docker/ directory has:
sql/docker/
    my.cnf
    initial_data.sql
    mysql-data/
        ....various files and directories for the actual database
And don’t forget to exclude that mysql-data directory from your image by including it in your
.dockerignore file:
.git
.idea
# Ignore the mysql data files that the dev mysql server creates.
sql/docker/mysql-data
Version 2a - custom user creation
The stock setup above works great - until you have some super old code that you don’t want to have
to upgrade to recent MySQL libraries. If you want to connect those projects to MySQL 8, you need to
tell the database that this user will be using an older authentication plugin. So you need to issue
the CREATE USER statement yourself. One option is to do only the user creation and privilege setting in your
script, leaving the database creation and loading as above. Put the following in
sql/docker/init.sql:
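Something along these lines should work (a sketch - the user, password, and
database names are the ones from the compose file above, so adjust them to match
your project):

-- sql/docker/init.sql
CREATE USER IF NOT EXISTS 'vote'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
GRANT ALL PRIVILEGES ON vote.* TO 'vote'@'%';
FLUSH PRIVILEGES;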
And then remove MYSQL_USER and MYSQL_PASSWORD from the docker-compose.yml environment.
Version 2b - fully custom script
The version above works fine, but if you need additional configuration of the database - or just
want to take full control of the user and database creation and data loading - you can use the
following.
First, move the initial_data.sql file out of the project’s sql/docker/ directory:
sql/
    initial_data.sql
    docker/
        my.cnf
        mysql-data/
            ....various files and directories for the actual database
Then remove MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD variables from your
docker-compose.yml and add another volume to mount the directory where you put initial_data.sql. I
just moved the file up one level and then mounted /sql as /sql_data in the container. The
docker-compose.yml should now look like this:
mysql:
  image: mysql:8.0.23
  container_name: "db-vote"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root_password
  cap_add:
    - SYS_NICE  # removes error messages like: mbind: Operation not permitted
  volumes:
    - ./sql/docker/mysql-data:/var/lib/mysql
    - ./sql/docker/my.cnf:/etc/mysql/conf.d/dev.cnf
    - ./sql/docker:/docker-entrypoint-initdb.d
    - ./sql:/sql_data
Then, in sql/docker/init.sql, create your user and database; then load your data from the mounted file:
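Again, a sketch with the names from the compose file above; the init scripts are
run through the mysql client, so its source command can pull in the mounted dump
(if that gives you trouble, a small .sh init script that pipes the dump into
mysql works too):

-- sql/docker/init.sql
CREATE DATABASE IF NOT EXISTS vote CHARACTER SET utf8mb4;
CREATE USER IF NOT EXISTS 'vote'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
GRANT ALL PRIVILEGES ON vote.* TO 'vote'@'%';
FLUSH PRIVILEGES;

USE vote;
SOURCE /sql_data/initial_data.sql;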
I have been (was?) a longtime emacs user - ever since I got a Unix account in
graduate school. In the first dot com boom, I worked for a company that was
mainly populated by emacs users and picked up a few tricks - including using
emacs without a mouse. Unlike most emacs users, I never did very much
customization. So in some people’s eyes, I was never really an emacs power user.
But that did have the advantage that I could sit down to any computer, install a
fresh emacs build, and immediately be completely comfortable with the
environment.
Well…. except for the fact that on a Mac keyboard, the alt key (on a Mac,
that's the option key) is in the wrong place. For a decade, I simply swapped
the keyboard bindings for the option and command keys. Worked great for me but
it meant that no one could sit down to my keyboard and do anything. A few years
ago I started working with a new team and thought we might do some pair
programming. I didn’t want to learn Vim and I couldn’t expect them to switch to
Emacs, so we all settled on SublimeText - and I quit remapping my keys at the OS
level.
SublimeText worked pretty well for our team and there are a number of things I
really like about it. I use the multiline editing a lot - and code folding can
be pretty convenient. I know emacs can do both of those things, but I never
learned how to set up either of those features in emacs. On the other hand, I
never learned to rewrap paragraphs in Sublime (alt-q in emacs). And if I have to
do any heavy search and replace, I still prefer using the keyboard shortcuts in
emacs to the equivalent replace function in Sublime.
I recently decided it was time for some professional self-improvement and that
configuring and really learning one text editor is one thing I need to do.
VSCode appears to be where all the cool kids are moving these days, so I decided
it was time for me to try it too.
Plugins and Configuration
Emacs Keymap
Since I used emacs without a mouse for literally decades, I use the emacs
keybindings to move around - even in a terminal. Having more of my familiar
keybindings available would make me a lot happier. I never installed them in
Sublime because the point of moving to Sublime was to have a unified editor
experience with my team. Unfortunately that team is no longer together, so no
reason for me not to set up my text editor to suit me and only me!
I initially found Emacs
Keymap
but later found a Reddit thread that pointed out that plugin had not been updated in a
while. I took Reddit’s advice and did a little comparison shopping. Awesome
Emacs
Keymap
has excellent ratings and recent updates. So let’s learn to use that. Based on
the instructions, I set editor.find.seedSearchStringFromSelection to false. I
also need the command key to work as Alt, so I set
emacs-mcx.useMetaPrefixMacCmd to true.
Nice key bindings:
Cmd-s and C-x C-s BOTH work to save a file!
C-a (and C-e) behave in an interesting way. The first time you use it, it goes
to the start of the visible line; the second time you use it, it goes to the
start of the logical line. Not what I am used to from emacs but super handy!
Things I need to get used to or fix:
VSCode shares the kill ring with the OS (BIG WIN!); however, once you get into
VSCode, paste is NOT Cmd-v, it’s C-y. I like C-y working, but switching
keyboard mappings in mid-operation may break my brain. We’ll have to see if
I get used to it.
I am not sure what emacs-mcx.cursorMoveOnFindWidget is. Sounds like I might
need to adjust it but not sure yet.
In the meantime, I have printed out the cheat sheet for keybindings.
Rewrap
One thing I miss in Sublime Text is the ability to rewrap paragraphs using M-q.
I use that A LOT when writing prose such as documentation or blog posts - or
when I edit a block comment and want to reflow the text. This VSCode extension,
Rewrap, looks
like what I want - it even says “Similar to wrap/fill paragraph in Sublime
(alt+q) Emacs (M-q) or Vim (gq); but more powerful.” The only wrinkle is that
its default keybinding is Option-q; M-q is already bound to VSCode’s
workbench.action.quit. A little searching told me how to remove that
keybinding (find it, right click, and choose “remove keybinding” from the
dropdown). Then I searched for the Rewrap plugin’s keybindings and swapped in my
usual M-q for rewrapping a paragraph.
I also turned on auto wrap (like emacs auto-fill-mode) and adjusted the
editor.wordWrapColumn to 100 since I think the default (80) is a bit narrow
these days. We’ll see what I think of that. If I don’t like it, I can either set
per file type widths OR try the ‘wrap to rulers’ option.
According to the docs, I should be
able to rewrap code comments without messing with doc comments within the
comment and without messing with the code itself. It will be very interesting to
see if that works.
Spell Right
I am a very poor speller. In emacs, I use ispell. So what can I replace that
with in VSCode? Searching for spellcheckers in extensions shows “Spell Right” is
popular and will use my Mac’s built-in dictionary. That will be super handy
since I can add words to the Mac dictionary like this:
https://www.makeuseof.com/tag/add-remove-words-mac-dictionary/
Sync Settings
Now for my last trick, I would like to have the same setup on my work computer
and on my personal laptop. VSCode allows me to save my settings - using a
private GIST attached to my GitHub account. There are a bunch of docs about how
to do partial syncs and how to resolve conflicts. But since this is a new
install on both computers, I just clicked the menu item for turning sync on,
told GitHub to allow VSCode to store information for me, and now I have matching
settings on both my computers.
Bonus!
There is an in-editor Markdown previewer!! So I can see how this post renders as
I compose it!
Out of the box, Wagtail comes with privacy options for
pages
and files
(anything stored in a Collection, e.g. documents and images). The
options that come built in
include:
accessible to anyone (aka Public)
accessible to people in specific groups
accessible to anyone who can log into your site at all
accessible if they have a password you enter on that form
That would seem like a fairly comprehensive list, but at work, we often
restrict things to anyone coming from specific IP addresses. So when
we rolled out our new CMS, we had requests for it to support the “on
campus” option - just like the old ones had.
Adding the “On Campus” option comes in two parts: adding it to the
options offered to the editor and then enforcing that restriction when
serving pages or documents.
Adding a New Restriction Choice
The privacy options are defined in the BaseViewRestriction class in
wagtail.core.models. We will be deciding if a browser is coming from
on or off campus with a middleware, so we will not have to add any columns
to the tables defined by classes inheriting from BaseViewRestriction.
But we do need to add “on campus” to the options offered to the site
editor.
To override the stock RESTRICTION_CHOICES from wagtail.core.models.BaseViewRestriction
with our own, we need to monkeypatch the class. We have a number of small
customizations in our wagtail site, so we collect them all into their own app,
wagtail_patches, which we include in INSTALLED_APPS.
# In our settings.py
INSTALLED_APPS = [
    # Our apps...
    'www',
    # The app containing our monkey patches
    'wagtail_patches',
    # Wagtail apps.
    'wagtail.embeds',
    'wagtail.sites',
    'wagtail.users',
    'wagtail.snippets',
    'wagtail.documents',
    'wagtail.images',
    'wagtail.search',
    'wagtail.admin',
    'wagtail.core',
]
And now the monkeypatching code:
# In wagtail_patches/monkey_patches.py
import wagtail.core.models
from django.utils.translation import gettext_lazy as _
from wagtail.core.models import BaseViewRestriction

# NONE, LOGIN, PASSWORD, and GROUPS are the restriction types Wagtail already defines;
# ON_CAMPUS is our new one.
NONE = BaseViewRestriction.NONE
LOGIN = BaseViewRestriction.LOGIN
PASSWORD = BaseViewRestriction.PASSWORD
GROUPS = BaseViewRestriction.GROUPS
ON_CAMPUS = 'on_campus'

RESTRICTION_CHOICES = (
    (NONE, _("Public")),
    (LOGIN, _("Private, accessible to logged-in users")),
    (ON_CAMPUS, _("Private, accessible to users on campus or on VPN")),
    (PASSWORD, _("Private, accessible with the following password")),
    (GROUPS, _("Private, accessible to users in specific groups")),
)

wagtail.core.models.BaseViewRestriction.ON_CAMPUS = ON_CAMPUS
wagtail.core.models.BaseViewRestriction.RESTRICTION_CHOICES = RESTRICTION_CHOICES
That will add the ON_CAMPUS choice to our form. Since there are no additional
parameters needed for this restriction, you wouldn’t think we would have to
make any additional changes to the form or form validations. But as of Django 3,
we also need to patch the model level validations. We do that like this:
# In wagtail_patches/monkey_patches.py
from django.core.exceptions import ValidationError


def patched_PageViewRestriction_clean_fields(self, exclude=None):
    """
    Clean all fields and raise a ValidationError containing a dict
    of all validation errors if any occur.
    """
    if exclude is None:
        exclude = []
    errors = {}
    for f in self._meta.fields:
        # BEGIN PATCH
        if f.attname == 'restriction_type':
            f.choices = RESTRICTION_CHOICES
        # END PATCH
        if f.name in exclude:
            continue
        # Skip validation for empty fields with blank=True. The developer
        # is responsible for making sure they have a valid value.
        raw_value = getattr(self, f.attname)
        if f.blank and raw_value in f.empty_values:
            continue
        try:
            setattr(self, f.attname, f.clean(raw_value, self))
        except ValidationError as e:
            errors[f.name] = e.error_list
    if errors:
        raise ValidationError(errors)


wagtail.core.models.PageViewRestriction.clean_fields = patched_PageViewRestriction_clean_fields
Enforcing Our New Restriction
In our setup, we have split enforcement into two parts, a middleware that
determines if a request is “on campus” or not and then code that uses that
information to show or not show the private page or file. We took this
approach because we already have a shared library that does the “on campus”
checking. If you do not need to share the code that checks for on vs off
campus, you may want to put that check directly into the code that enforces
the rule.
On Campus Middleware
Define the following middleware somewhere in your project - customizing it
with your own IP addresses.
import re

from django.utils.deprecation import MiddlewareMixin

# NOTE: get_client_ip() comes from our shared library - any helper that pulls the
# client's IP address out of the request will work here.


class OnCampusMiddleware(MiddlewareMixin):
    """
    Middleware sets the ON_CAMPUS session variable to True if the request
    came from a campus IP or if the user is authenticated.
    """
    CAMPUS_ADDRESSES = [
        r'192\.168\.\d{1,3}\.\d{1,3}',
    ]

    def check_ip(self, request):
        client_ip = get_client_ip(request)
        if client_ip:
            for ip_regex in self.CAMPUS_ADDRESSES:
                if re.match(ip_regex, client_ip):
                    return True
        return False

    def process_request(self, request):
        # A user is considered "on campus" if they are visiting from a campus IP,
        # or are logged in to the site.
        request.session['ON_CAMPUS'] = request.user.is_authenticated or self.check_ip(request)
        return None
Then add this to the MIDDLEWARE list in your Django settings file. Since
this middleware is a silent pass-through in both directions (only the side
effect of setting the ON_CAMPUS session variable to True or False matters),
it can go almost anywhere in the list - as long as it comes after the session
and authentication middleware, since it reads request.session and request.user.
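For example, a sketch (the dotted path to OnCampusMiddleware is hypothetical -
use wherever you actually defined the class):

# In settings.py
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    # Our middleware (hypothetical dotted path):
    'wagtail_patches.middleware.OnCampusMiddleware',
]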
Updating the Enforcement Code
The meat of the restriction enforcement is in BaseViewRestriction’s accept_request
method, so we need to add our new on-campus check:
def patched_accept_request(self, request):
    if self.restriction_type == BaseViewRestriction.PASSWORD:
        passed_restrictions = request.session.get(self.passed_view_restrictions_session_key, [])
        if self.id not in passed_restrictions:
            return False

    elif self.restriction_type == BaseViewRestriction.LOGIN:
        if not request.user.is_authenticated:
            return False

    # BEGIN PATCH
    # Add a privacy mode that allows only on-campus visitors.
    elif self.restriction_type == wagtail.core.models.BaseViewRestriction.ON_CAMPUS:
        if not request.session['ON_CAMPUS']:
            return False
    # END PATCH

    elif self.restriction_type == BaseViewRestriction.GROUPS:
        if not request.user.is_superuser:
            current_user_groups = request.user.groups.all()
            if not any(group in current_user_groups for group in self.groups.all()):
                return False

    return True


wagtail.core.models.BaseViewRestriction.accept_request = patched_accept_request
What should happen when accept_request returns False? That depends on which
restriction triggers the failure. For example, if a user fails the LOGIN restriction,
they should be directed to log in - but if they fail the ON_CAMPUS restriction,
they should get an error message. The correct actions for the built-in
restriction types are handled in a before_serve_page hook called
check_view_restrictions.
Since we have already monkey patched some other hooks, what we did was to monkey
patch check_view_restrictions:
# In wagtail_patches/wagtail_hook_patches.py
from django.template.response import TemplateResponse
from django.urls import reverse
from wagtail.core.hooks import _hooks, get_hooks
from wagtail.core.models import PageViewRestriction
from wagtail.core.wagtail_hooks import require_wagtail_login


def patched_check_view_restrictions(page, request, serve_args, serve_kwargs):
    """
    Check whether there are any view restrictions on this page which are
    not fulfilled by the given request object. If there are, return an
    HttpResponse that will notify the user of that restriction (and possibly
    include a password / login form that will allow them to proceed). If
    there are no such restrictions, return None.
    """
    for restriction in page.get_view_restrictions():
        if not restriction.accept_request(request):
            if restriction.restriction_type == PageViewRestriction.PASSWORD:
                from wagtail.core.forms import PasswordViewRestrictionForm
                form = PasswordViewRestrictionForm(
                    instance=restriction,
                    initial={'return_url': request.get_full_path()},
                )
                action_url = reverse(
                    'wagtailcore_authenticate_with_password',
                    args=[restriction.id, page.id],
                )
                return page.serve_password_required_response(request, form, action_url)
            elif restriction.restriction_type in [PageViewRestriction.LOGIN, PageViewRestriction.GROUPS]:
                return require_wagtail_login(next_url=request.get_full_path())
            # Begin patch: Added a code path for the on_campus restriction.
            elif restriction.restriction_type == PageViewRestriction.ON_CAMPUS:
                # We set request.is_preview like Page.serve() would have done, since this code bypasses it.
                request.is_preview = getattr(request, 'is_preview', False)
                # Render the on_campus_only.html template, instead of the usual page template.
                return TemplateResponse(request, 'core/on_campus_only.html', page.get_context(request))
            # end patch


def patch_hooks():
    """
    This function replaces various wagtail hook implementations with our own versions.
    """
    for ndx, _ in enumerate(get_hooks('before_serve_page')):
        func = _hooks['before_serve_page'][ndx][0]
        if func.__module__ == 'wagtail.core.wagtail_hooks':
            _hooks['before_serve_page'][ndx] = (patched_check_view_restrictions, 0)
But looking at the source code for the page serve view, I don’t think
we need to replace the existing check_view_restrictions. I think we
can just add an additional before_serve_page hook that returns our
“sorry you need to be on campus to see this page” message. If I were
doing this from scratch, I would put the following code into one of my
wagtail_hooks.py files (either in the wagtail_patches app or in the
app that contains most of my page models).
# In wagtail_patches/wagtail_hooks.py
from django.template.response import TemplateResponse

from wagtail.core import hooks
from wagtail.core.models import PageViewRestriction


@hooks.register('before_serve_page')
def enforce_on_campus_restriction(page, request, serve_args, serve_kwargs):
    for restriction in page.get_view_restrictions():
        if restriction.restriction_type == PageViewRestriction.ON_CAMPUS and not restriction.accept_request(request):
            # We set request.is_preview like Page.serve() would have done, since this code bypasses it.
            request.is_preview = getattr(request, 'is_preview', False)
            # Render the on_campus_only.html template instead of the usual page template.
            return TemplateResponse(request, 'core/on_campus_only.html', page.get_context(request))
At work, we need to build a scheduling system. We want to present the
user with a list of possible dates - and then the possible slots on
that date. I don’t want to have all the possible empty slots in the
database so I thought I would have to build them procedurally using
Python.
import calendar
from datetime import timedelta

from pytz import timezone as pytz_timezone

AVAILABLE_DAYS = ['Monday', 'Wednesday', 'Friday']
AVAILABLE_START_TIME = {'hours': 8, 'minutes': 0, 'timezone': 'UTC'}
# start + estimated duration must be earlier than this
AVAILABLE_END_TIME = {'hours': 20, 'minutes': 0, 'timezone': 'UTC'}


def possible_times(start_date, end_date, estimated_duration, granularity=60):
    '''
    Returns a list of times when a user may start a reservation between start_date and end_date (inclusive).
    By default reservations may start hourly from AVAILABLE_START_TIME onwards;
    you may adjust how frequently reservations may start by setting the 'granularity' (in minutes).
    '''
    possibles = []
    date = _first_slot(start_date)
    while date <= end_date:
        if not _is_possible_day(date):
            # skip this day
            date += timedelta(days=1)
            continue
        # find slots on day
        last_slot = _last_slot(date, estimated_duration)
        while date <= last_slot:
            possibles.append(date)
            date += timedelta(minutes=granularity)
        # go to next day
        date = _first_slot(date + timedelta(days=1))
    return possibles


# ############## helper methods #######################
def _is_possible_day(date, available_days=None):
    if not available_days:
        available_days = AVAILABLE_DAYS
    return calendar.day_name[date.weekday()] in available_days


def _first_slot(date, start_time=None):
    '''Returns the first slot of the day'''
    if not start_time:
        start_time = AVAILABLE_START_TIME
    first_slot = date.replace(
        hour=start_time['hours'],
        minute=start_time['minutes'],
        tzinfo=pytz_timezone(start_time['timezone']),
    )
    return first_slot


def _last_slot(date, duration, end_time=None):
    if not end_time:
        end_time = AVAILABLE_END_TIME
    last_slot = date.replace(
        hour=end_time['hours'],
        minute=end_time['minutes'],
        tzinfo=pytz_timezone(end_time['timezone']),
    )
    last_slot -= duration
    return last_slot
The code above loops over the days in the range - and then on
available days, loops over the hours in that day and returns a list of
datetimes. There is a lot of ugly adding of Python timedelta objects
and resetting the time to start iterating on a new day. It works - but
the next step, eliminating slots that are already full, is going to be
even uglier - lots of tedious “does this interval overlap with existing
scheduled events” checks. Here are a couple of tests that exercise it:
from datetime import datetime, timezone, timedelta

from django.test import TestCase

from ..utils import possible_times


class ReservationUtilsTests(TestCase):
    # ############### integration tests ##########################
    def test_no_possible_times_on_sunday(self):
        start_date = datetime(2017, 4, 30, hour=8, minute=0, tzinfo=timezone.utc)
        end_date = datetime(2017, 4, 30, hour=23, minute=0, tzinfo=timezone.utc)
        duration = 60  # in minutes
        slots = possible_times(start_date, end_date, duration)
        self.assertEqual(slots, [])

    def test_hourly_possible_times_end_earlier_if_takes_longer(self):
        start_date = datetime(2017, 4, 30, hour=8, minute=0, tzinfo=timezone.utc)
        end_date = datetime(2017, 5, 2, hour=23, minute=0, tzinfo=timezone.utc)
        expected = [
            datetime(2017, 5, 1, 8, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 9, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 10, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 11, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 12, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 13, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 14, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 15, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 16, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 17, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 18, 0, tzinfo=timezone.utc),
            datetime(2017, 5, 1, 19, 0, tzinfo=timezone.utc),
        ]
        slots = possible_times(start_date, end_date, estimated_duration=timedelta(minutes=60))
        self.assertEqual(len(slots), 12)
        self.assertEqual(slots, expected)

        slots2 = possible_times(start_date, end_date, estimated_duration=timedelta(minutes=120))
        self.assertEqual(len(slots2), 11)
        self.assertEqual(slots2, expected[0:11])
When I started looking into how to check for overlaps, I looked at doing
the check in the database - and found that a)
Postgres has a date range data type (tstzrange), b) Django’s Postgres
extensions has a field that wraps the Postgres tstzrange field
(DateTimeRangeField), and c) the Postgres docs even have
an example
of how to create indexes that prevent you from scheduling more than
one person to occupy a specific room at one time. All that ugly
Python turns into:
import calendar

from django.db import connection

AVAILABLE_DAYS = ['Monday', 'Wednesday', 'Friday']
AVAILABLE_START_TIME = '08:00'
# start + estimated duration must be earlier than this
AVAILABLE_END_TIME = '20:00'


def possible_times(start_date, end_date, estimated_duration, granularity=60):
    allowed_days = [list(calendar.day_name).index(day) + 1 for day in AVAILABLE_DAYS]
    cursor = connection.cursor()
    sql = '''
        SELECT *
        FROM generate_series (timestamp %(start_date)s
                            , timestamp %(end_date)s - interval '%(duration)s minutes'
                            , interval '%(granularity)sm') h
        WHERE EXTRACT(ISODOW FROM h) in %(allowed_days)s
          AND h::time >= %(start_time)s
          AND h::time <= %(end_time)s - interval '%(duration)s minutes'
        ;
    '''
    cursor.execute(sql, {
        'start_date': start_date,
        'start_time': AVAILABLE_START_TIME,
        'end_date': end_date,
        'end_time': AVAILABLE_END_TIME,
        'duration': estimated_duration,
        'granularity': granularity,
        'allowed_days': tuple(allowed_days),
    })
    slots = [row[0] for row in cursor.fetchall()]
    return slots
The only slightly tricky part of that was restricting allowed days to
MWF. I want my constant to use the day names, not the integers
Postgres uses for days of the week. So I needed to import Python’s
calendar module to convert “Monday” to an integer. Python uses 0
for Monday, but Postgres thinks Monday is 1, so add 1. Then it took me
a little while to figure out how to pass a list into the query in a
way that everything is properly interpolated and quoted; the trick:
tuple(allowed_days).
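For example, for MWF the conversion and the final parameter value look like this:

>>> import calendar
>>> allowed_days = [list(calendar.day_name).index(day) + 1 for day in ['Monday', 'Wednesday', 'Friday']]
>>> allowed_days
[1, 3, 5]
>>> tuple(allowed_days)  # this is what gets passed to cursor.execute()
(1, 3, 5)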
Now I just need to join to my reservations table to exclude slots
where the schedule is already full.