pull_requests
4 rows where repo = 303218369
Columns: id, node_id, number, state, locked, title, user, body, created_at, updated_at, closed_at, merged_at, merge_commit_sha, assignee, milestone, draft, head, base, author_association, repo, url, merged_by, auto_merge

Each row is listed below as a record, one column per line, with the body last; empty columns are shown as (none).
id: 525371029
node_id: MDExOlB1bGxSZXF1ZXN0NTI1MzcxMDI5
number: 8
state: closed
locked: 0
title: fix import error if note has no "updated" element
user: mkorosec 4028322
created_at: 2020-11-22T22:51:05Z
updated_at: 2021-02-11T22:34:06Z
closed_at: 2021-02-11T22:34:06Z
merged_at: 2021-02-11T22:34:06Z
merge_commit_sha: 1c8457ddaa487aa2e677963d37217fcb6d544e59
assignee: (none)
milestone: (none)
draft: 0
head: 03b0c240d5f12c2d651c4cb25f92b0fecc7f7419
base: 1c355e5678877e14eefa2a5fab5a267342a03335
author_association: CONTRIBUTOR
repo: evernote-to-sqlite 303218369
url: https://github.com/dogsheep/evernote-to-sqlite/pull/8
merged_by: (none)
auto_merge: (none)
body:

I got the following error when executing `evernote-to-sqlite enex evernote.db evernote.enex`:

```
...
  File "evernote_to_sqlite/cli.py", line 31, in enex
    save_note(db, note)
  File "evernote_to_sqlite/utils.py", line 28, in save_note
    updated = note.find("updated").text
AttributeError: 'NoneType' object has no attribute 'text'
```

It seems that in some cases the "updated" element is not added to the note. This is part of the problematic note:

```
<created>20201019T074518Z</created>
<note-attributes>
  <source>web.clip7</source>
  <source-application>webclipper.evernote</source-application>
</note-attributes>
```
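The crash described above comes from calling `.text` on the result of `note.find("updated")` when the element is absent. Below is a minimal sketch of that kind of guard, using only `xml.etree.ElementTree` and the element names from the snippet above; falling back to the created timestamp is an assumption for this sketch, not necessarily what the PR does.

```
# Illustration only: guard against an ENEX <note> that has no <updated> element.
# Falling back to the <created> timestamp is an assumed choice for this sketch.
import xml.etree.ElementTree as ET

note = ET.fromstring(
    "<note>"
    "<title>Example</title>"
    "<created>20201019T074518Z</created>"
    "</note>"
)

created = note.find("created").text
updated_element = note.find("updated")
updated = updated_element.text if updated_element is not None else created
print(created, updated)  # here both print the created timestamp
```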
id: 542406910
node_id: MDExOlB1bGxSZXF1ZXN0NTQyNDA2OTEw
number: 10
state: closed
locked: 0
title: BugFix for encoding and not update info.
user: riverzhou 1277270
created_at: 2020-12-18T08:58:54Z
updated_at: 2021-02-11T22:37:56Z
closed_at: 2021-02-11T22:37:56Z
merged_at: (none)
merge_commit_sha: 4425daeccd43ce3c7bb45deaae577984f978e40f
assignee: (none)
milestone: (none)
draft: 0
head: 7b8b96b69f43cb2247875c3ca6d39878edf77a78
base: 92254b71075c8806bca258c939e24af8397cdf98
author_association: NONE
repo: evernote-to-sqlite 303218369
url: https://github.com/dogsheep/evernote-to-sqlite/pull/10
merged_by: (none)
auto_merge: (none)
body:

Bugfix 1:

```
Traceback (most recent call last):
  File "d:\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "d:\anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\Anaconda3\Scripts\evernote-to-sqlite.exe\__main__.py", line 7, in <module>
  File "d:\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
  File "d:\anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "d:\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
    return ctx.invoke(self.callback, **ctx.params)
  File "d:\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "d:\anaconda3\lib\site-packages\evernote_to_sqlite\cli.py", line 30, in enex
    for tag, note in find_all_tags(fp, ["note"], progress_callback=bar.update):
  File "d:\anaconda3\lib\site-packages\evernote_to_sqlite\utils.py", line 11, in find_all_tags
    chunk = fp.read(1024 * 1024)
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa4 in position 383: illegal multibyte sequence
```

Bugfix 2:

```
Traceback (most recent call last):
  File "D:\Anaconda3\Scripts\evernote-to-sqlite-script.py", line 33, in <module>
    sys.exit(load_entry_point('evernote-to-sqlite==0.3', 'console_scripts', 'evernote-to-sqlite')())
  File "D:\Anaconda3\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "D:\Anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "D:\Anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "D:\Anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "D:\Anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kw…
```
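The first traceback above is a locale issue: on a Windows install whose default text encoding is gbk, `open()` without an explicit encoding tries to decode a UTF-8 ENEX export with the gbk codec. Below is a hedged sketch of the general remedy, reading the file with an explicit encoding; the function and parameter names are made up for illustration and are not taken from the PR.

```
# Sketch: read an ENEX export in fixed-size chunks with an explicit UTF-8
# encoding, so the platform default codec (gbk in the traceback above) is
# never used. errors="replace" keeps one stray bad byte from aborting the run.
def read_chunks(path, chunk_size=1024 * 1024):
    with open(path, encoding="utf-8", errors="replace") as fp:
        while True:
            chunk = fp.read(chunk_size)
            if not chunk:
                break
            yield chunk
```

Opening the file in binary mode and decoding later would work just as well; the point is simply not to rely on the platform default encoding.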
id: 645100848
node_id: MDExOlB1bGxSZXF1ZXN0NjQ1MTAwODQ4
number: 12
state: open
locked: 0
title: Recovering of malformed ENEX file
user: engdan77 8431437
created_at: 2021-05-15T07:49:31Z
updated_at: 2021-05-15T19:57:50Z
closed_at: (none)
merged_at: (none)
merge_commit_sha: 95f21ca163606db74babd036e6fa44b7d484d137
assignee: (none)
milestone: (none)
draft: 0
head: a5839dadaa43694f208ad74a53670cebbe756956
base: 0bc6ba503eecedb947d2624adbe1327dd849d7fe
author_association: FIRST_TIME_CONTRIBUTOR
repo: evernote-to-sqlite 303218369
url: https://github.com/dogsheep/evernote-to-sqlite/pull/12
merged_by: (none)
auto_merge: (none)
body:

Hey! Awesome work on this project; I found it very useful and it saved me some work. Thanks! :)

Some background to this PR: I had been searching for a tool that would let me transform my personal collection of Evernote notes into a format that is easier to search and potentially easier to import into future services. I then discovered problems processing my large export (~5 GB) with the existing code, which uses Python's built-in XML parser and was unable to get through the file without an exception breaking the process. My first attempt was to switch to the more robust lxml package, which handles huge documents and supports "recover" mode, but even though it did better it also failed to process the whole file. Even the memory-efficient etree.iterparse() unfortunately ran into trouble.

With no luck finding any other library that could parse this enormous file, I instead built a "hugexmlparser" module that parses the file as a generator (working at the byte level) and lets you set a maximum size for a <note>, to cope with potentially malformed notes or undesirably large attachments in the export, while handling the exceptions that come up. In some cases the parser finds malformed XML within <content>; in those cases it tries to save as much as possible by escaping it (to be dealt with at a later stage, which is better than nothing), and if an end </note> is missing before a new (malformed?) note, it adds one when it encounters the next start tag. The code for the recovery process is a bit rough and certainly has room for refactoring, but at the moment it seems to achieve what I wanted.

The recovered notes are then passed to a slightly changed version of save_note_recovery() so the existing behaviour is preserved. I also added this as a new recover-enex command in click and kept the original options. A couple of new tests were added as well to exercise this command. This currently works for me, but I thought I would share it as a PR in case you find use for it yourself or it proves useful to others finding this rep…
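As a rough illustration of the byte-level streaming idea described in this PR (this is not the PR's actual "hugexmlparser", just a sketch under the assumption that notes are delimited by literal `<note>`/`</note>` tags and that a size cap is acceptable), one can scan the file in chunks and skip any note that grows past the cap:

```
# Sketch of a chunked byte-level scan for <note>...</note> blocks in a huge
# ENEX file. Oversized (possibly malformed) notes are dropped so one bad note
# cannot abort the whole export. Names and limits here are assumptions.
def iter_notes(path, max_note_bytes=50 * 1024 * 1024, chunk_size=1024 * 1024):
    open_tag, close_tag = b"<note>", b"</note>"
    buffer = b""
    with open(path, "rb") as fp:
        while chunk := fp.read(chunk_size):
            buffer += chunk
            while True:
                start = buffer.find(open_tag)
                if start == -1:
                    # Keep a short tail in case an opening tag is split across chunks.
                    buffer = buffer[-len(close_tag):]
                    break
                end = buffer.find(close_tag, start)
                if end == -1:
                    if len(buffer) - start > max_note_bytes:
                        # Oversized or unterminated note: drop its opening tag
                        # and resume scanning at the next <note>.
                        buffer = buffer[start + len(open_tag):]
                        continue
                    buffer = buffer[start:]  # wait for more data
                    break
                yield buffer[start:end + len(close_tag)]
                buffer = buffer[end + len(close_tag):]
```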
id: 771790589
node_id: PR_kwDOEhK-wc4uAJb9
number: 15
state: open
locked: 0
title: include note tags in the export
user: d-rep 436138
created_at: 2021-11-02T20:04:31Z
updated_at: 2021-11-02T20:04:31Z
closed_at: (none)
merged_at: (none)
merge_commit_sha: ee36aba995b0a5385bdf9a451851dcfc316ff7f6
assignee: (none)
milestone: (none)
draft: 0
head: 8cc3aa49c6e61496b04015c14048c5dac58d6b42
base: fff89772b4404995400e33fe1d269050717ff4cf
author_association: FIRST_TIME_CONTRIBUTOR
repo: evernote-to-sqlite 303218369
url: https://github.com/dogsheep/evernote-to-sqlite/pull/15
merged_by: (none)
auto_merge: (none)
body:

When parsing the Evernote `<note>` elements, the script will now also parse any nested `<tag>` elements, writing them out into a separate sqlite table. Here is an example of how to query the data after the script has run:

```
select
  notes.*,
  (
    select group_concat(tag)
    from notes_tags
    where notes_tags.note_id = notes.id
  ) as tags
from notes;
```

My .enex source file is 3+ years old so I am assuming the structure hasn't changed. Interestingly, my _notebook names_ show up in the _tags_ list where the tag name is prefixed with `notebook_`, so this could maybe help work around the first limitation mentioned in the [evernote-to-sqlite blog post](https://simonwillison.net/2020/Oct/16/building-evernote-sqlite-exporter/).
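For reference, collecting nested `<tag>` elements into rows for a separate notes_tags table can look roughly like the sketch below; the `note_id` value and the row layout are placeholders for illustration, not the PR's exact implementation.

```
# Illustration: collect nested <tag> elements from an ENEX <note> into
# (note_id, tag) rows suitable for a separate notes_tags table.
import xml.etree.ElementTree as ET

note = ET.fromstring(
    "<note>"
    "<title>Example note</title>"
    "<tag>recipes</tag>"
    "<tag>notebook_Personal</tag>"
    "</note>"
)

note_id = "example-note-id"  # placeholder; the real script derives an id for the note
rows = [{"note_id": note_id, "tag": tag.text} for tag in note.findall("tag")]
print(rows)
```

The `notebook_Personal` tag in the example mirrors the `notebook_`-prefixed tags the author observed in their own export.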
The underlying table schema:

```
CREATE TABLE [pull_requests] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [state] TEXT,
  [locked] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [body] TEXT,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [merged_at] TEXT,
  [merge_commit_sha] TEXT,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [draft] INTEGER,
  [head] TEXT,
  [base] TEXT,
  [author_association] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [url] TEXT,
  [merged_by] INTEGER REFERENCES [users]([id]),
  [auto_merge] TEXT
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
```
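Given that schema, here is a small sketch of querying the table with the standard-library `sqlite3` module; the `github.db` file name is an assumption (any database containing this pull_requests table would do).

```
# Count pull requests per state for this repo, mirroring the
# "repo = 303218369" filter used on this page. "github.db" is an assumed name.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    select state, count(*) as n
    from pull_requests
    where repo = ?
    group by state
    order by n desc
    """,
    (303218369,),
).fetchall()
print(rows)  # e.g. [('closed', 2), ('open', 2)] for the four rows above
```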