Mirror of https://github.com/krateng/maloja.git (synced 2023-08-10 21:12:55 +03:00)

Compare commits: 3b156a73ff ... c77b7c952f (15 commits)
Commits (SHA1, newest first):

- c77b7c952f
- 8a44d3def2
- cf04583122
- 8845f931df
- 9c6c91f594
- 2c31df3c58
- 9c656ee90b
- 938947d06c
- ac3ca0b5e9
- 64d4036f55
- 6df363a763
- 7062c0b440
- ad50ee866c
- 62abc31930
- c55e12dd43
API.md (30 changes)

@@ -1,6 +1,7 @@
 # Scrobbling
 
-In order to scrobble from a wide selection of clients, you can use Maloja's standard-compliant APIs with the following settings:
+Scrobbling can be done with the native API, see [below](#submitting-a-scrobble).
+In order to scrobble from a wide selection of clients, you can also use Maloja's standard-compliant APIs with the following settings:
 
 GNU FM |
 ------ | ---------
@@ -41,7 +42,7 @@ The user starts playing '(Fine Layers of) Slaysenflite', which is exactly 3:00 m
 * If the user ends the play after 1:22, no scrobble is submitted
 * If the user ends the play after 2:06, a scrobble with `"duration":126` is submitted
 * If the user jumps back several times and ends the play after 3:57, a scrobble with `"duration":237` is submitted
-* If the user jumps back several times and ends the play after 4:49, two scrobbles with `"duration":180` and `"duration":109` should be submitted
+* If the user jumps back several times and ends the play after 4:49, two scrobbles with `"duration":180` and `"duration":109` are submitted
 
 </td></tr>
 <table>
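The four bullets above fully determine the duration bookkeeping. As a speculative reconstruction (not code from Maloja itself), the rule they imply is: a completed play only becomes its own scrobble once the next play has reached half the track length, and the final chunk counts if it is at least half the track length:

def expected_scrobbles(listen_seconds, track_length):
    # Speculative reconstruction of the rule implied by the examples
    # above; this is NOT Maloja's actual implementation.
    scrobbles = []
    remaining = listen_seconds
    while remaining >= track_length * 1.5:
        scrobbles.append(track_length)
        remaining -= track_length
    if remaining >= track_length / 2:
        scrobbles.append(remaining)
    return scrobbles

assert expected_scrobbles(82, 180) == []           # 1:22 -> no scrobble
assert expected_scrobbles(126, 180) == [126]       # 2:06 -> "duration":126
assert expected_scrobbles(237, 180) == [237]       # 3:57 -> "duration":237
assert expected_scrobbles(289, 180) == [180, 109]  # 4:49 -> 180 and 109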
@@ -54,11 +55,26 @@ The native Maloja API is reachable at `/apis/mlj_1`. Endpoints are listed on `/a
 All endpoints return JSON data. POST request can be made with query string or form data arguments, but this is discouraged - JSON should be used whenever possible.
 
 No application should ever rely on the non-existence of fields in the JSON data - i.e., additional fields can be added at any time without this being considered a breaking change. Existing fields should usually not be removed or changed, but it is always a good idea to add basic handling for missing fields.
 
+## Submitting a Scrobble
+
+The POST endpoint `/newscrobble` is used to submit new scrobbles. These use a flat JSON structure with the following fields:
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `artists` | List(String) | Track artists |
+| `title` | String | Track title |
+| `album` | String | Name of the album (Optional) |
+| `albumartists` | List(String) | Album artists (Optional) |
+| `duration` | Integer | How long the song was listened to in seconds (Optional) |
+| `length` | Integer | Actual length of the full song in seconds (Optional) |
+| `time` | Integer | Timestamp of the listen if it was not at the time of submitting (Optional) |
+| `nofix` | Boolean | Skip server-side metadata fixing (Optional) |
+
 ## General Structure
 
-Most endpoints follow this structure:
+The API is not fully consistent in order to ensure backwards-compatibility. Refer to the individual endpoints.
+Generally, most endpoints follow this structure:
 
 | Key | Type | Description |
 | --- | --- | --- |
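For illustration, a scrobble built from the fields above can be POSTed to the native endpoint like this. The host, port and the `key` authentication field are assumptions for the example, not taken from the diff:

import json
from urllib.request import Request, urlopen

scrobble = {
    "artists": ["Artist A", "Artist B"],  # List(String), required
    "title": "Example Song",              # String, required
    "album": "Example Album",             # optional
    "duration": 126,                      # seconds listened, optional
    "length": 180,                        # full track length, optional
    "time": 1660000000,                   # omit to scrobble "now"
    "key": "your-api-key",                # assumed auth parameter
}
req = Request(
    "http://localhost:42010/apis/mlj_1/newscrobble",
    data=json.dumps(scrobble).encode(),
    headers={"Content-Type": "application/json"},
)
print(urlopen(req).read().decode())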
@@ -66,7 +82,7 @@ Most endpoints follow this structure:
 | `error` | Mapping | Details about the error if one occured. |
 | `warnings` | List | Any warnings that did not result in failure, but should be noted. Field is omitted if there are no warnings! |
 | `desc` | String | Human-readable feedback. This can be shown directly to the user if desired. |
-| `list` | List | List of returned [entities](#Entity-Structure) |
+| `list` | List | List of returned [entities](#entity-structure) |
 
 
 Both errors and warnings have the following structure:
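Assembled from the field tables above (values invented), a failing response would be shaped roughly like this:

example_response = {
    "status": "failure",
    "error": {
        "type": "missing_scrobble_data",
        "value": ["artists", "title"],
        "desc": "The scrobble is missing needed parameters.",
    },
    "desc": "Human-readable feedback.",
    # "warnings" is omitted entirely when there are none
}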
@@ -87,7 +103,7 @@ Whenever a list of entities is returned, they have the following fields:
 | Key | Type | Description |
 | --- | --- | --- |
 | `time` | Integer | Timestamp of the Scrobble in UTC |
-| `track` | Mapping | The [track](#Track) being scrobbled |
+| `track` | Mapping | The [track](#track) being scrobbled |
 | `duration` | Integer | How long the track was played for in seconds |
 | `origin` | String | Client that submitted the scrobble, or import source |
 
@@ -118,7 +134,7 @@ Whenever a list of entities is returned, they have the following fields:
 | Key | Type | Description |
 | --- | --- | --- |
-| `artists` | List | The [artists](#Artist) credited with the track |
+| `artists` | List | The [artists](#artist) credited with the track |
 | `title` | String | The title of the track |
 | `length` | Integer | The full length of the track in seconds |
 
@@ -42,3 +42,10 @@ minor_release_name: "Yeonhee"
     - "[Bugfix] Fixed importing a Spotify file without path"
     - "[Bugfix] No longer releasing database lock during scrobble creation"
     - "[Distribution] Experimental arm64 image"
+3.0.7:
+  commit: "62abc319303a6cb6463f7c27b6ef09b76fc67f86"
+  notes:
+    - "[Bugix] Improved signal handling"
+    - "[Bugix] Fixed constant re-caching of all-time stats, significantly increasing page load speed"
+    - "[Logging] Disabled cache information when cache is not used"
+    - "[Distribution] Experimental arm/v7 image"
@@ -6,3 +6,5 @@ minor_release_name: "Soyeon"
     - "[Feature] Implemented track title and artist name editing from web interface"
     - "[Feature] Implemented track and artist merging from web interface"
     - "[Feature] Implemented scrobble reparsing from web interface"
+    - "[Performance] Adjusted cache sizes"
+    - "[Logging] Added cache memory use information"
@@ -6,6 +6,7 @@ FOLDER = "dev/releases"
 
 releases = {}
 for f in os.listdir(FOLDER):
+    if f == "branch.yml": continue
     #maj,min = (int(i) for i in f.split('.')[:2])
 
     with open(os.path.join(FOLDER,f)) as fd:
@@ -1,6 +1,7 @@
 import os
 import signal
 import subprocess
+import time
 
 from setproctitle import setproctitle
 from ipaddress import ip_address
@@ -40,9 +41,10 @@ def get_instance_supervisor():
     return None
 
 def restart():
-    stop()
-    start()
+    if stop():
+        start()
+    else:
+        print(col["red"]("Could not stop Maloja!"))
 
 def start():
     if get_instance_supervisor() is not None:
@@ -69,16 +71,28 @@ def start():
 
 def stop():
 
-    pid_sv = get_instance_supervisor()
-    if pid_sv is not None:
-        os.kill(pid_sv,signal.SIGTERM)
+    for attempt in [(signal.SIGTERM,2),(signal.SIGTERM,5),(signal.SIGKILL,3),(signal.SIGKILL,5)]:
+
+        pid_sv = get_instance_supervisor()
+        pid = get_instance()
+
+        if pid is None and pid_sv is None:
+            print("Maloja stopped!")
+            return True
+
+        if pid_sv is not None:
+            os.kill(pid_sv,attempt[0])
+        if pid is not None:
+            os.kill(pid,attempt[0])
+
+        time.sleep(attempt[1])
+
+    return False
+
 
-    pid = get_instance()
-    if pid is not None:
-        os.kill(pid,signal.SIGTERM)
-
-    if pid is None and pid_sv is None:
-        return False
 
     print("Maloja stopped!")
     return True
@@ -4,7 +4,7 @@
 # you know what f*ck it
 # this is hardcoded for now because of that damn project / package name discrepancy
 # i'll fix it one day
-VERSION = "3.0.6"
+VERSION = "3.0.7"
 HOMEPAGE = "https://github.com/krateng/maloja"
 
 
@@ -40,7 +40,7 @@ api.__apipath__ = "mlj_1"
 
 
 errors = {
-    database.MissingScrobbleParameters: lambda e: (400,{
+    database.exceptions.MissingScrobbleParameters: lambda e: (400,{
         "status":"failure",
         "error":{
             'type':'missing_scrobble_data',
@@ -48,6 +48,14 @@ errors = {
             'desc':"The scrobble is missing needed parameters."
         }
     }),
+    database.exceptions.MissingEntityParameter: lambda e: (400,{
+        "status":"error",
+        "error":{
+            'type':'missing_entity_parameter',
+            'value':None,
+            'desc':"This API call is not valid without an entity (track or artist)."
+        }
+    }),
     database.exceptions.EntityExists: lambda e: (409,{
         "status":"failure",
         "error":{
@@ -56,7 +64,16 @@
             'desc':"This entity already exists in the database. Consider merging instead."
         }
     }),
-    Exception: lambda e: (500,{
+    database.exceptions.DatabaseNotBuilt: lambda e: (503,{
+        "status":"error",
+        "error":{
+            'type':'server_not_ready',
+            'value':'db_upgrade',
+            'desc':"The database is being upgraded. Please try again later."
+        }
+    }),
+    # for http errors, use their status code
+    Exception: lambda e: ((e.status_code if hasattr(e,'statuscode') else 500),{
         "status":"failure",
         "error":{
             'type':'unknown_error',
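The errors mapping above pairs exception classes with functions producing an HTTP status and a JSON body. The @catch_exceptions decorator itself is not part of this diff, so the following is only a sketch of the likely consumption pattern; note that insertion order matters, since specific classes must be tested before the Exception catch-all:

def make_catch_exceptions(errors):
    # Sketch only: reconstructs the probable shape of @catch_exceptions,
    # which is not shown in this diff.
    def decorator(func):
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as e:
                for exc_type, handler in errors.items():
                    if isinstance(e, exc_type):  # first match wins
                        status, payload = handler(e)
                        return status, payload
        return wrapper
    return decorator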
@@ -185,6 +202,7 @@ def get_scrobbles_external(**keys):
     if k_amount.get('perpage') is not math.inf: result = result[:k_amount.get('perpage')]
 
     return {
+        "status":"ok",
         "list":result
     }
 
@@ -204,6 +222,7 @@ def get_scrobbles_num_external(**keys):
     result = database.get_scrobbles_num(**ckeys)
 
     return {
+        "status":"ok",
         "amount":result
     }
 
@@ -224,6 +243,7 @@ def get_tracks_external(**keys):
     result = database.get_tracks(**ckeys)
 
     return {
+        "status":"ok",
         "list":result
     }
 
@@ -240,6 +260,7 @@ def get_artists_external():
     result = database.get_artists()
 
     return {
+        "status":"ok",
         "list":result
     }
 
@@ -261,6 +282,7 @@ def get_charts_artists_external(**keys):
     result = database.get_charts_artists(**ckeys)
 
     return {
+        "status":"ok",
         "list":result
     }
 
@@ -280,6 +302,7 @@ def get_charts_tracks_external(**keys):
     result = database.get_charts_tracks(**ckeys)
 
     return {
+        "status":"ok",
         "list":result
     }
 
@@ -300,6 +323,7 @@ def get_pulse_external(**keys):
     results = database.get_pulse(**ckeys)
 
     return {
+        "status":"ok",
         "list":results
     }
 
@@ -320,6 +344,7 @@ def get_performance_external(**keys):
     results = database.get_performance(**ckeys)
 
     return {
+        "status":"ok",
         "list":results
     }
 
@@ -340,6 +365,7 @@ def get_top_artists_external(**keys):
     results = database.get_top_artists(**ckeys)
 
     return {
+        "status":"ok",
         "list":results
     }
 
@@ -362,6 +388,7 @@ def get_top_tracks_external(**keys):
     results = database.get_top_tracks(**ckeys)
 
     return {
+        "status":"ok",
         "list":results
     }
 
@@ -386,7 +413,7 @@ def artist_info_external(**keys):
 @api.get("trackinfo")
 @catch_exceptions
 @add_common_args_to_docstring(filterkeys=True)
-def track_info_external(artist:Multi[str],**keys):
+def track_info_external(artist:Multi[str]=[],**keys):
     """Returns information about a track
 
     :return: track (Mapping), scrobbles (Integer), position (Integer), medals (Mapping), certification (String), topweeks (Integer)
@@ -691,7 +718,8 @@ def reparse_scrobble(timestamp):
     if result:
         return {
             "status":"success",
-            "desc":f"Scrobble was reparsed!"
+            "desc":f"Scrobble was reparsed!",
+            "scrobble":result
         }
     else:
         return {
@@ -1,5 +1,5 @@
 # server
-from bottle import request, response, FormsDict, HTTPError
+from bottle import request, response, FormsDict
 
 # rest of the project
 from ..cleanup import CleanerAgent
@@ -13,6 +13,7 @@ from ..apis import apikeystore
 from . import sqldb
 from . import cached
 from . import dbcache
+from . import exceptions
 
 # doreah toolkit
 from doreah.logging import log
@@ -42,23 +43,12 @@ dbstatus = {
     "rebuildinprogress":False,
     "complete":False # information is complete
 }
-class DatabaseNotBuilt(HTTPError):
-    def __init__(self):
-        super().__init__(
-            status=503,
-            body="The Maloja Database is being upgraded to Version 3. This could take quite a long time! (~ 2-5 minutes per 10 000 scrobbles)",
-            headers={"Retry-After":120}
-        )
-
-
-class MissingScrobbleParameters(Exception):
-    def __init__(self,params=[]):
-        self.params = params
 
 
 def waitfordb(func):
     def newfunc(*args,**kwargs):
-        if not dbstatus['healthy']: raise DatabaseNotBuilt()
+        if not dbstatus['healthy']: raise exceptions.DatabaseNotBuilt()
         return func(*args,**kwargs)
     return newfunc
 
@@ -97,7 +87,7 @@ def incoming_scrobble(rawscrobble,fix=True,client=None,api=None,dbconn=None):
             missing.append(necessary_arg)
     if len(missing) > 0:
         log(f"Invalid Scrobble [Client: {client} | API: {api}]: {rawscrobble} ",color='red')
-        raise MissingScrobbleParameters(missing)
+        raise exceptions.MissingScrobbleParameters(missing)
 
 
     log(f"Incoming scrobble [Client: {client} | API: {api}]: {rawscrobble}")
@@ -128,7 +118,9 @@ def reparse_scrobble(timestamp):
     # check if id changed
     if sqldb.get_track_id(scrobble['track']) != track_id:
         sqldb.edit_scrobble(timestamp, {'track':newscrobble['track']})
-        return True
+        dbcache.invalidate_entity_cache()
+        dbcache.invalidate_caches()
+        return sqldb.get_scrobble(timestamp=timestamp)
 
     return False
 
@@ -199,6 +191,7 @@ def merge_artists(target_id,source_ids):
     log(f"Merging {sources} into {target}")
     result = sqldb.merge_artists(target_id,source_ids)
     dbcache.invalidate_entity_cache()
+    dbcache.invalidate_caches()
 
     return result
 
@@ -209,6 +202,7 @@ def merge_tracks(target_id,source_ids):
     log(f"Merging {sources} into {target}")
     result = sqldb.merge_tracks(target_id,source_ids)
     dbcache.invalidate_entity_cache()
+    dbcache.invalidate_caches()
 
     return result
 
@@ -305,6 +299,8 @@ def get_performance(dbconn=None,**keys):
             if c["artist"] == artist:
                 rank = c["rank"]
                 break
+        else:
+            raise exceptions.MissingEntityParameter()
         results.append({"range":rng,"rank":rank})
 
     return results
@@ -344,6 +340,7 @@ def get_top_tracks(dbconn=None,**keys):
 def artist_info(dbconn=None,**keys):
 
     artist = keys.get('artist')
+    if artist is None: raise exceptions.MissingEntityParameter()
 
     artist_id = sqldb.get_artist_id(artist,dbconn=dbconn)
     artist = sqldb.get_artist(artist_id,dbconn=dbconn)
@@ -388,6 +385,7 @@ def artist_info(dbconn=None,**keys):
 def track_info(dbconn=None,**keys):
 
     track = keys.get('track')
+    if track is None: raise exceptions.MissingEntityParameter()
 
     track_id = sqldb.get_track_id(track,dbconn=dbconn)
     track = sqldb.get_track(track_id,dbconn=dbconn)
@@ -5,6 +5,7 @@
 import lru
 import psutil
 import json
+import sys
 from doreah.regular import runhourly
 from doreah.logging import log
 
@@ -12,16 +13,10 @@ from ..pkg_global.conf import malojaconfig
 
 
 
 if malojaconfig['USE_GLOBAL_CACHE']:
-    CACHE_SIZE = 1000
-    ENTITY_CACHE_SIZE = 100000
 
-    cache = lru.LRU(CACHE_SIZE)
-    entitycache = lru.LRU(ENTITY_CACHE_SIZE)
+    cache = lru.LRU(10000)
+    entitycache = lru.LRU(100000)
 
-    hits, misses = 0, 0
 
 
 
@@ -31,11 +26,10 @@ if malojaconfig['USE_GLOBAL_CACHE']:
         trim_cache()
 
     def print_stats():
-        log(f"Cache Size: {len(cache)} [{len(entitycache)} E], System RAM Utilization: {psutil.virtual_memory().percent}%, Cache Hits: {hits}/{hits+misses}")
-        #print("Full rundown:")
-        #import sys
-        #for k in cache.keys():
-        #    print(f"\t{k}\t{sys.getsizeof(cache[k])}")
+        for name,c in (('Cache',cache),('Entity Cache',entitycache)):
+            hits, misses = c.get_stats()
+            log(f"{name}: Size: {len(c)} | Hits: {hits}/{hits+misses} | Estimated Memory: {human_readable_size(c)}")
+        log(f"System RAM Utilization: {psutil.virtual_memory().percent}%")
 
 
     def cached_wrapper(inner_func):
@@ -49,12 +43,9 @@ if malojaconfig['USE_GLOBAL_CACHE']:
             global hits, misses
             key = (serialize(args),serialize(kwargs), inner_func, kwargs.get("since"), kwargs.get("to"))
 
-            if key in cache:
-                hits += 1
-                return cache.get(key)
-            else:
-                misses += 1
+            try:
+                return cache[key]
+            except KeyError:
                 result = inner_func(*args,**kwargs,dbconn=conn)
                 cache[key] = result
                 return result
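The switch from an explicit membership test to try/except works because lru-dict's item access both returns the value and refreshes its recency, and the library tracks hits and misses itself (the get_stats() used by print_stats above). A standalone sketch of the pattern with the same package Maloja imports:

from lru import LRU

cache = LRU(1000)

def lookup(key, compute):
    try:
        return cache[key]                   # hit: also marks key recently used
    except KeyError:
        cache[key] = result = compute(key)  # miss: compute and fill
        return result

lookup("a", str.upper)
lookup("a", str.upper)
print(cache.get_stats())                    # (hits, misses), e.g. (1, 1)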
@@ -67,25 +58,18 @@ if malojaconfig['USE_GLOBAL_CACHE']:
     # cache that's aware of what we're calling
     def cached_wrapper_individual(inner_func):
 
         def outer_func(set_arg,**kwargs):
             if 'dbconn' in kwargs:
                 conn = kwargs.pop('dbconn')
             else:
                 conn = None
 
-            #global hits, misses
             result = {}
             for id in set_arg:
-                if (inner_func,id) in entitycache:
+                try:
                     result[id] = entitycache[(inner_func,id)]
-                    #hits += 1
-                else:
+                except KeyError:
                     pass
-                    #misses += 1
 
 
             remaining = inner_func(set(e for e in set_arg if e not in result),dbconn=conn)
             for id in remaining:
@@ -115,13 +99,14 @@ if malojaconfig['USE_GLOBAL_CACHE']:
     def trim_cache():
         ramprct = psutil.virtual_memory().percent
         if ramprct > malojaconfig["DB_MAX_MEMORY"]:
-            log(f"{ramprct}% RAM usage, clearing cache and adjusting size!")
+            log(f"{ramprct}% RAM usage, clearing cache!")
+            for c in (cache,entitycache):
+                c.clear()
             #ratio = 0.6
             #targetsize = max(int(len(cache) * ratio),50)
             #log(f"Reducing to {targetsize} entries")
             #cache.set_size(targetsize)
             #cache.set_size(HIGH_NUMBER)
-            cache.clear()
             #if cache.get_size() > CACHE_ADJUST_STEP:
             #    cache.set_size(cache.get_size() - CACHE_ADJUST_STEP)
 
@@ -156,3 +141,32 @@ def serialize(obj):
     elif isinstance(obj,dict):
         return "{" + ",".join(serialize(o) + ":" + serialize(obj[o]) for o in obj) + "}"
     return json.dumps(obj.hashable())
+
+
+def get_size_of(obj,counted=None):
+    if counted is None:
+        counted = set()
+    if id(obj) in counted: return 0
+    size = sys.getsizeof(obj)
+    counted.add(id(obj))
+    try:
+        for k,v in obj.items():
+            size += get_size_of(v,counted=counted)
+    except:
+        try:
+            for i in obj:
+                size += get_size_of(i,counted=counted)
+        except:
+            pass
+    return size
+
+def human_readable_size(obj):
+    units = ['','K','M','G','T','P']
+    idx = 0
+    bytes = get_size_of(obj)
+    while bytes > 1024 and len(units) > idx+1:
+        bytes = bytes / 1024
+        idx += 1
+
+    return f"{bytes:.2f} {units[idx]}B"
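A quick illustration of the new size helpers, assuming they are imported from this module; the exact numbers vary by platform:

from maloja.database.dbcache import get_size_of, human_readable_size

# get_size_of walks mappings and iterables recursively, tracking id()s
# so shared objects are counted only once.
data = {"a": list(range(1000)), "b": {"nested": "x" * 10000}}
print(get_size_of(data))          # raw byte estimate
print(human_readable_size(data))  # scaled, e.g. "18.63 KB"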
@@ -1,3 +1,5 @@
+from bottle import HTTPError
+
 class EntityExists(Exception):
     def __init__(self,entitydict):
         self.entitydict = entitydict
@@ -8,3 +10,20 @@ class TrackExists(EntityExists):
 
 class ArtistExists(EntityExists):
     pass
+
+
+class DatabaseNotBuilt(HTTPError):
+    def __init__(self):
+        super().__init__(
+            status=503,
+            body="The Maloja Database is being upgraded to Version 3. This could take quite a long time! (~ 2-5 minutes per 10 000 scrobbles)",
+            headers={"Retry-After":120}
+        )
+
+
+class MissingScrobbleParameters(Exception):
+    def __init__(self,params=[]):
+        self.params = params
+
+class MissingEntityParameter(Exception):
+    pass
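Because DatabaseNotBuilt now subclasses bottle's HTTPError, raising it inside any route makes bottle render the 503 with its Retry-After header directly; no error handler registration is needed. A self-contained sketch (route and body invented for illustration):

from bottle import Bottle, HTTPError

class DatabaseNotBuilt(HTTPError):
    def __init__(self):
        super().__init__(status=503, body="Database upgrade in progress.",
                         headers={"Retry-After": 120})

app = Bottle()

@app.get("/api/test")
def handler():
    raise DatabaseNotBuilt()  # rendered as HTTP 503 + Retry-After: 120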
@@ -23,7 +23,8 @@ class JinjaDBConnection:
         return self
     def __exit__(self, exc_type, exc_value, exc_traceback):
         self.conn.close()
-        log(f"Generated page with {self.hits}/{self.hits+self.misses} local Cache hits",module="debug_performance")
+        if malojaconfig['USE_REQUEST_CACHE']:
+            log(f"Generated page with {self.hits}/{self.hits+self.misses} local Cache hits",module="debug_performance")
         del self.cache
     def __getattr__(self,name):
         originalmethod = getattr(database,name)
@@ -115,8 +115,11 @@ def connection_provider(func):
             return func(*args,**kwargs)
         else:
             with engine.connect() as connection:
-                kwargs['dbconn'] = connection
-                return func(*args,**kwargs)
+                with connection.begin():
+                    kwargs['dbconn'] = connection
+                    return func(*args,**kwargs)
 
+    wrapper.__innerfunc__ = func
     return wrapper
 
 ##### DB <-> Dict translations
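The added connection.begin() wraps every decorated call in an explicit transaction: it commits when the function returns normally and rolls back if it raises. A minimal standalone illustration of that SQLAlchemy pattern (in-memory SQLite, invented table):

from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")
with engine.connect() as connection:
    with connection.begin():
        connection.execute(text("CREATE TABLE scrobbles (ts INTEGER)"))
        connection.execute(text("INSERT INTO scrobbles VALUES (1660000000)"))
    # transaction committed on clean exit of the begin() block
    print(connection.execute(text("SELECT count(*) FROM scrobbles")).scalar())  # 1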
@@ -439,7 +442,7 @@ def merge_tracks(target_id,source_ids,dbconn=None):
         track_id=target_id
     )
     result = dbconn.execute(op)
-    clean_db()
+    clean_db(dbconn=dbconn)
 
     return True
 
@@ -488,8 +491,8 @@ def merge_artists(target_id,source_ids,dbconn=None):
     # result = dbconn.execute(op)
 
     # this could have created duplicate tracks
-    merge_duplicate_tracks(artist_id=target_id)
-    clean_db()
+    merge_duplicate_tracks(artist_id=target_id,dbconn=dbconn)
+    clean_db(dbconn=dbconn)
 
     return True
 
@@ -868,38 +871,37 @@ def search_track(searchterm,dbconn=None):
 ##### MAINTENANCE
 
 @runhourly
-def clean_db():
-
-    with SCROBBLE_LOCK:
-        with engine.begin() as conn:
-            log(f"Database Cleanup...")
+@connection_provider
+def clean_db(dbconn=None):
+
+    log(f"Database Cleanup...")
 
     to_delete = [
         # tracks with no scrobbles (trackartist entries first)
         "from trackartists where track_id in (select id from tracks where id not in (select track_id from scrobbles))",
         "from tracks where id not in (select track_id from scrobbles)",
         # artists with no tracks
         "from artists where id not in (select artist_id from trackartists) and id not in (select target_artist from associated_artists)",
         # tracks with no artists (scrobbles first)
         "from scrobbles where track_id in (select id from tracks where id not in (select track_id from trackartists))",
         "from tracks where id not in (select track_id from trackartists)"
     ]
 
     for d in to_delete:
-        selection = conn.execute(sql.text(f"select * {d}"))
+        selection = dbconn.execute(sql.text(f"select * {d}"))
         for row in selection.all():
             log(f"Deleting {row}")
-        deletion = conn.execute(sql.text(f"delete {d}"))
+        deletion = dbconn.execute(sql.text(f"delete {d}"))
 
     log("Database Cleanup complete!")
 
 
     #if a2+a1>0: log(f"Deleted {a2} tracks without scrobbles ({a1} track artist entries)")
 
     #if a3>0: log(f"Deleted {a3} artists without tracks")
 
     #if a5+a4>0: log(f"Deleted {a5} tracks without artists ({a4} scrobbles)")
 
@@ -920,40 +922,39 @@ def renormalize_names():
         rows = conn.execute(DB['artists'].update().where(DB['artists'].c.id == id).values(name_normalized=norm_target))
 
 
-def merge_duplicate_tracks(artist_id):
-    with engine.begin() as conn:
-        rows = conn.execute(
+@connection_provider
+def merge_duplicate_tracks(artist_id,dbconn=None):
+    rows = dbconn.execute(
         DB['trackartists'].select().where(
             DB['trackartists'].c.artist_id == artist_id
         )
     )
     affected_tracks = [r.track_id for r in rows]
 
     track_artists = {}
-    rows = conn.execute(
+    rows = dbconn.execute(
         DB['trackartists'].select().where(
             DB['trackartists'].c.track_id.in_(affected_tracks)
         )
     )
 
     for row in rows:
         track_artists.setdefault(row.track_id,[]).append(row.artist_id)
 
     artist_combos = {}
     for track_id in track_artists:
         artist_combos.setdefault(tuple(sorted(track_artists[track_id])),[]).append(track_id)
 
     for c in artist_combos:
         if len(artist_combos[c]) > 1:
             track_identifiers = {}
             for track_id in artist_combos[c]:
                 track_identifiers.setdefault(normalize_name(get_track(track_id)['title']),[]).append(track_id)
             for track in track_identifiers:
                 if len(track_identifiers[track]) > 1:
                     target,*src = track_identifiers[track]
-                    merge_tracks(target,src)
+                    merge_tracks(target,src,dbconn=dbconn)
@@ -148,9 +148,9 @@ malojaconfig = Configuration(
     "Technical":{
         "cache_expire_positive":(tp.Integer(), "Image Cache Expiration", 60, "Days until images are refetched"),
         "cache_expire_negative":(tp.Integer(), "Image Cache Negative Expiration", 5, "Days until failed image fetches are reattempted"),
-        "db_max_memory":(tp.Integer(min=0,max=100), "RAM Percentage soft limit", 80, "RAM Usage in percent at which Maloja should no longer increase its database cache."),
+        "db_max_memory":(tp.Integer(min=0,max=100), "RAM Percentage soft limit", 50, "RAM Usage in percent at which Maloja should no longer increase its database cache."),
         "use_request_cache":(tp.Boolean(), "Use request-local DB Cache", False),
-        "use_global_cache":(tp.Boolean(), "Use global DB Cache", False)
+        "use_global_cache":(tp.Boolean(), "Use global DB Cache", True)
     },
     "Fluff":{
         "scrobbles_gold":(tp.Integer(), "Scrobbles for Gold", 250, "How many scrobbles a track needs to be considered 'Gold' status"),
maloja/web/jinja/icons/nodata.jinja (new file, 7 lines)

@@ -0,0 +1,7 @@
+<td style="opacity:0.5;text-align:center;">
+    <svg height="96px" viewBox="0 0 24 24" width="96px">
+        <path d="M0 0h24v24H0z" fill="none"/>
+        <path d="M4.27 3L3 4.27l9 9v.28c-.59-.34-1.27-.55-2-.55-2.21 0-4 1.79-4 4s1.79 4 4 4 4-1.79 4-4v-1.73L19.73 21 21 19.73 4.27 3zM14 7h4V3h-6v5.18l2 2z"/>
+    </svg>
+    <br/>No scrobbles yet!
+</td>
@@ -9,8 +9,12 @@
 {% set charts_cycler = cycler(*charts_14) %}
 
 
 <table class="tiles_top"><tr>
     {% for segment in range(3) %}
+        {% if charts_14[0] is none and loop.first %}
+            {% include 'icons/nodata.jinja' %}
+        {% else %}
         <td>
             {% set segmentsize = segment+1 %}
             <table class="tiles_{{ segmentsize }}x{{ segmentsize }} tiles_sub">
@@ -35,6 +39,7 @@
                 </tr>
             {%- endfor -%}
             </table>
         </td>
+        {% endif %}
     {% endfor %}
 </tr></table>
@@ -11,6 +11,9 @@
 
 <table class="tiles_top"><tr>
     {% for segment in range(3) %}
+        {% if charts_14[0] is none and loop.first %}
+            {% include 'icons/nodata.jinja' %}
+        {% else %}
         <td>
             {% set segmentsize = segment+1 %}
             <table class="tiles_{{ segmentsize }}x{{ segmentsize }} tiles_sub">
@@ -35,6 +38,7 @@
                 </tr>
             {%- endfor %}
             </table>
         </td>
+        {% endif %}
     {% endfor %}
 </tr></table>
@@ -58,6 +58,10 @@ div.header h1 {
     settings icon
 **/
 
+svg {
+    fill: var(--text-color);
+}
+
 div#icon_bar {
     position:fixed;
     right:30px;
@@ -69,14 +73,13 @@ div#icon_bar div.clickable_icon {
     height:26px;
     width:26px;
 }
-div.clickable_icon {
-    fill: var(--text-color);
+div.clickable_icon svg {
     cursor: pointer;
 }
-div.clickable_icon:hover {
+div.clickable_icon:hover svg {
     fill: var(--text-color-focus);
 }
-div.clickable_icon.danger:hover {
+div.clickable_icon.danger:hover svg {
     fill: red;
 }
 
@@ -521,6 +524,7 @@ table.list tr {
     background-color: var(--current-bg-color);
     border-color: var(--current-bg-color);
     height: 1.45em;
+    transition: opacity 2s;
 }
 
@@ -670,6 +674,13 @@ table.list tr.removed {
 }
 
 
+table.list tr.changed {
+    /*background-color: rgba(222,209,180,0.7) !important;*/
+    opacity:0;
+    transition: opacity 0.2s;
+}
+
+
 /*
 table td.artists div {
     overflow:hidden;
@@ -43,7 +43,11 @@ function reparseScrobble(id, element) {
     callback_func = function(req){
         if (req.status == 200) {
             if (req.response.status != 'no_operation') {
-                window.location.reload();
+                //window.location.reload();
+                notifyCallback(req);
+                var newtrack = req.response.scrobble.track;
+                var row = element.parentElement.parentElement.parentElement.parentElement;
+                changeScrobbleRow(row,newtrack);
             }
             else {
                 notifyCallback(req);
@@ -58,6 +62,43 @@ function reparseScrobble(id, element) {
 
 }
 
+function changeScrobbleRow(element,newtrack) {
+    element.classList.add('changed');
+
+    setTimeout(function(){
+        element.getElementsByClassName('track')[0].innerHTML = createTrackCell(newtrack);
+    },200);
+    setTimeout(function(){element.classList.remove('changed')},300);
+}
+
+function createTrackCell(trackinfo) {
+
+    var trackquery = new URLSearchParams();
+    trackinfo.artists.forEach((a)=>trackquery.append('artist',a));
+    trackquery.append('title',trackinfo.title);
+
+    tracklink = document.createElement('a');
+    tracklink.href = "/track?" + trackquery.toString();
+    tracklink.textContent = trackinfo.title;
+
+    artistelements = []
+    var artistholder = document.createElement('span');
+    artistholder.classList.add('artist_in_trackcolumn');
+    for (var a of trackinfo.artists) {
+        var artistquery = new URLSearchParams();
+        artistquery.append('artist',a);
+
+        artistlink = document.createElement('a');
+        artistlink.href = "/artist?" + artistquery.toString();
+        artistlink.textContent = a;
+
+        artistelements.push(artistlink.outerHTML)
+    }
+    artistholder.innerHTML = artistelements.join(", ");
+    return artistholder.outerHTML + " – " + tracklink.outerHTML;
+}
+
+
 // EDIT NAME
 function editEntity() {
@@ -1,6 +1,6 @@
 [project]
 name = "malojaserver"
-version = "3.0.6"
+version = "3.0.7"
 description = "Self-hosted music scrobble database"
 readme = "./README.md"
 requires-python = ">=3.7"