Compare commits

...

183 commits
v0.5.0 ... main

Author SHA1 Message Date
Barrett Ruth
ff5ba39a59 docs: fix dependencies section in readme
Problem: time and timeout were listed as optional dependencies despite
being required for plugin initialization. nix was not mentioned as an
alternative to uv for the Python scraping environment.

Solution: rename section to "Dependencies", list time/timeout first,
and add nix as an alternative to uv for scraping.
2026-02-21 23:59:40 -05:00
Barrett Ruth
760e7d7731 fix(ci): format 2026-02-20 17:49:34 -05:00
Barrett Ruth
49e4233b3f fix: decouple python env setup from config init
Problem: setup_python_env() is called from check_required_runtime()
during config.setup(), which runs on the very first :CP command. The
uv sync and nix build calls use vim.system():wait(), blocking the
Neovim event loop. During the block the UI is frozen and
vim.schedule-based log messages never render, so the user sees an
unresponsive editor with no feedback.

Solution: remove setup_python_env() from check_required_runtime() so
config init is instant. Call it lazily from run_scraper() instead,
only when a scraper subprocess is actually needed. Use vim.notify +
vim.cmd.redraw() before blocking calls so the notification renders
immediately via a forced screen repaint, rather than being queued
behind vim.schedule.
2026-02-18 17:49:04 -05:00
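The notify-then-redraw pattern described in this commit can be sketched roughly as follows. This is an illustrative reconstruction, not the plugin's actual code; the function name and message text are assumptions:

```lua
-- Hypothetical sketch of the pattern described above.
local function setup_python_env()
  -- Show feedback *before* blocking: a plain vim.notify queued via
  -- vim.schedule would never render while the event loop is blocked.
  vim.notify('cp.nvim: syncing python environment...', vim.log.levels.INFO)
  vim.cmd.redraw() -- force a screen repaint so the message is visible now

  -- vim.system():wait() blocks the Neovim event loop until uv finishes.
  local result = vim.system({ 'uv', 'sync' }, { text = true }):wait()
  if result.code ~= 0 then
    vim.notify('uv sync failed: ' .. (result.stderr or ''), vim.log.levels.ERROR)
  end
end
```

Calling this lazily from the run path, rather than during `config.setup()`, keeps the first `:CP` invocation instant.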
Barrett Ruth
622620f6d0 feat: add debug logging to python env, scraper, and runner
Problem: with debug = true, there is not enough diagnostic output to
troubleshoot environment or execution issues. The resolved python path,
scraper commands, and compile/run shell commands are not logged.

Solution: add logger.log calls at key decision points: python env
resolution (nix vs uv vs discovery), uv sync stderr output, scraper
subprocess commands, and compile/run shell strings. All gated behind
the existing debug flag so they only appear when debug = true.
2026-02-18 17:40:06 -05:00
Barrett Ruth
976838d981 fix: always run uv sync to recover from partial installs
Problem: setup_python_env() skips uv sync when .venv/ exists. If a
previous sync was interrupted (e.g. network timeout), the directory
exists but is broken, and every subsequent session silently uses a
corrupt environment.

Solution: remove the isdirectory guard and always run uv sync. It is
idempotent and near-instant when dependencies are already installed,
so the only cost is one subprocess call per session.
2026-02-18 17:32:12 -05:00
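The guard removal described above amounts to the following sketch (paths and helper names are illustrative assumptions, not the plugin's real identifiers):

```lua
-- Illustrative path; the real scraper directory differs.
local scraper_dir = vim.fn.stdpath('data') .. '/cp-scrapers'

-- Before (hypothetical): a broken .venv from an interrupted sync is
-- silently reused forever.
-- if vim.fn.isdirectory(scraper_dir .. '/.venv') == 0 then ... end

-- After: always sync. `uv sync` is idempotent and near-instant when the
-- environment is already complete, so the guard bought nothing.
vim.system({ 'uv', 'sync' }, { cwd = scraper_dir, text = true }):wait()
```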
Barrett Ruth
06f72bbe2b fix: only show user-configured platforms in picker
Problem: tbl_deep_extend merges user platforms on top of defaults, so
all four default platforms survive even when the user only configures a
subset. The picker then shows platforms the user never intended to use.

Solution: before the deep merge, prune any default platform not present
in the user's platforms table. This preserves per-platform default
filling (the user doesn't have to re-specify every field) while ensuring
only explicitly configured platforms appear.
2026-02-18 17:29:41 -05:00
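The prune-then-merge approach can be sketched like this. The default table and function name are hypothetical; only the `vim.tbl_deep_extend` mechanics follow the commit message:

```lua
-- Hypothetical defaults table for illustration.
local defaults = {
  atcoder = { default_language = 'cpp' },
  codechef = { default_language = 'cpp' },
  codeforces = { default_language = 'cpp' },
  cses = { default_language = 'cpp' },
}

local function merge_platforms(user_platforms)
  local pruned = {}
  for name, platform_defaults in pairs(defaults) do
    if user_platforms[name] ~= nil then
      pruned[name] = platform_defaults -- keep only configured platforms
    end
  end
  -- Deep merge still fills unspecified fields from the defaults.
  return vim.tbl_deep_extend('force', pruned, user_platforms)
end
```

A user who configures only `codeforces` then sees only `codeforces` in the picker, with its unspecified fields filled from the defaults.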
Barrett Ruth
6045042dfb fix: surface runtime check failures as clean notifications
Problem: when required dependencies (GNU time/timeout, Python env) are
missing, config.setup() throws a raw error() that surfaces as a Lua
traceback. On macOS without coreutils the message is also redundant
("GNU time not found: GNU time not found") and offers no install hint.

Solution: wrap config.setup() in pcall inside ensure_initialized(),
strip the Lua source-location prefix, and emit a vim.notify at ERROR
level. Add Darwin-specific install guidance to the GNU time/timeout
not-found messages. Pass capability reasons directly instead of
wrapping them in a redundant outer message.
2026-02-18 17:25:50 -05:00
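The pcall-and-strip pattern can be sketched as below; `config.setup` stands in for the plugin's real setup entry point, and the prefix pattern assumes the standard `path/to/file.lua:123:` shape that `error()` prepends:

```lua
-- Hypothetical sketch of surfacing setup failures cleanly.
local function ensure_initialized()
  local ok, err = pcall(config.setup)
  if not ok then
    -- error() prepends a "file.lua:123: " source-location prefix;
    -- strip it so the user sees only the human-readable message.
    local msg = tostring(err):gsub('^.-:%d+:%s*', '')
    vim.notify('cp.nvim: ' .. msg, vim.log.levels.ERROR)
    return false
  end
  return true
end
```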
Barrett Ruth
c192afc5d7 fix(ci): format 2026-02-18 14:13:37 -05:00
Barrett Ruth
b6f3398bbc fix(ci): formatting and typing 2026-02-18 14:13:37 -05:00
Barrett Ruth
e02a29bd40 fix(ci): remove duplicate workflows 2026-02-18 14:13:37 -05:00
Barrett Ruth
0f9715298e fix(ci): remove deprecated setups 2026-02-18 14:13:37 -05:00
Barrett Ruth
2148d9bd07 feat(nix): add health 2026-02-18 14:13:37 -05:00
Barrett Ruth
1162e7046b try to fix the setup 2026-02-18 14:13:37 -05:00
Barrett Ruth
b36ffba63a feat(nix): initial flake config; 2026-02-18 14:13:37 -05:00
Barrett Ruth
04d0c124cf
fix: remove flake config 2026-02-17 21:11:11 -05:00
Barrett Ruth
da433068ef
remove straggler file 2026-02-17 21:10:56 -05:00
Barrett Ruth
51504b0121
fix: flake config; 2026-02-17 21:10:29 -05:00
Barrett Ruth
49df7e015d docs: add setup section and reorder vimdoc
Problem: the vimdoc had no setup section, and configuration was buried
after commands and mappings.

Solution: add a cp-setup section with lazy.nvim example and move both
setup and configuration above commands for better discoverability.
2026-02-17 21:09:58 -05:00
Barrett Ruth
029ea125b9 feat: add <Plug> mappings for all primary actions
Problem: users who want keybindings must call vim.cmd('CP run') or
reach into internal Lua modules directly. There is no stable,
discoverable, lazy-load-friendly public API for key binding.

Solution: define 7 <Plug> mappings in plugin/cp.lua that dispatch
through the same handle_command() code path as :CP. Document them
in a new MAPPINGS section in the vimdoc with helptags and an example
config block.
2026-02-07 13:23:45 -05:00
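One such mapping might look like the sketch below. The `<Plug>` name and the `handle_command` argument shape are illustrative guesses, not the plugin's documented API:

```lua
-- Hypothetical sketch: a <Plug> mapping dispatching through the same
-- code path as :CP.
vim.keymap.set('n', '<Plug>(CpRun)', function()
  require('cp').handle_command({ fargs = { 'run' } })
end, { desc = 'cp.nvim: run test cases' })

-- Users then bind it in their own config:
-- vim.keymap.set('n', '<leader>cr', '<Plug>(CpRun)')
```

Because `<Plug>` mappings are inert until the user maps onto them, they stay lazy-load friendly.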
Barrett Ruth
43193c3762
Merge pull request #239 from barrettruth/refactor/remove-cp-config-compat
refactor: remove vim.g.cp_config compatibility shim
2026-02-06 16:42:41 -05:00
Barrett Ruth
de2bc07532 refactor: remove vim.g.cp_config compatibility shim
Problem: the deprecated vim.g.cp_config fallback was kept for
backwards compatibility after the rename to vim.g.cp in v0.7.6.

Solution: drop the shim entirely and update the setup() deprecation
target to v0.7.7.
2026-02-06 16:40:39 -05:00
Barrett Ruth
041e09ac04
Merge pull request #238 from barrettruth/fix/setup-code-hook-language
fix(setup): set language state before setup_code hook on first open
2026-02-06 16:38:41 -05:00
Barrett Ruth
d23b4e59d1 fix(setup): set language state before setup_code hook on first open
Problem: when opening a contest for the first time (metadata not
cached), the setup_code hook fired before state.set_language() was
called, causing state.get_language() to return nil inside the hook.

Solution: call state.set_language(lang) before the hook in the
provisional-buffer branch of setup_contest(). The value is already
computed at that point and is identical to what setup_problem() sets
later, so the early write is idempotent.
2026-02-06 16:29:46 -05:00
Barrett Ruth
19e71ac7fa
Merge pull request #237 from barrettruth/feat/vim-g-update
refactor: rename `vim.g.cp_config` to `vim.g.cp`
2026-02-06 16:07:03 -05:00
Barrett Ruth
a54a06f939 refactor: rename vim.g.cp_config to vim.g.cp 2026-02-06 15:16:21 -05:00
Barrett Ruth
b2c7f16890
Merge pull request #234 from barrettruth/fix/deprecation-warning
fix: add deprecation warning for setup()
2026-02-03 21:51:19 -05:00
Barrett Ruth
276241447c fix: add deprecation warning for setup() 2026-02-03 21:46:47 -05:00
Barrett Ruth
dc635d5167 chore: add issue templates 2026-02-03 21:07:01 -05:00
Barrett Ruth
81ddd1ea87
Merge pull request #231 from barrettruth/fix/config
use `vim.g` for setup
2026-02-03 16:14:19 -05:00
Barrett Ruth
7444a99b22
Merge branch 'main' into fix/config 2026-02-03 16:13:35 -05:00
Barrett Ruth
ec487aa489 feat: config update to vim.g 2026-02-03 16:12:47 -05:00
Barrett Ruth
c4af9bf604
Merge pull request #228 from barrettruth/fix/doc
via, not main
2026-02-03 01:51:38 -05:00
Barrett Ruth
a4437bc1c6
Merge branch 'main' into fix/doc 2026-02-03 01:50:46 -05:00
Barrett Ruth
1a7e9517ba force 2026-02-03 01:50:22 -05:00
Barrett Ruth
11b8365aac via, not main 2026-02-03 01:49:47 -05:00
Barrett Ruth
585ebf0daf
Merge pull request #227 from barrettruth/fix/doc
update installation method
2026-02-03 01:43:56 -05:00
Barrett Ruth
08fb654d23 format yml too in pre-commit 2026-02-03 01:43:13 -05:00
Barrett Ruth
01efc7c344 fix(ci): prettier format 2026-02-03 01:41:35 -05:00
Barrett Ruth
f9f993db0c fix: pre-commit syntax error 2026-02-03 01:39:26 -05:00
Barrett Ruth
f184a7874a feat: update docs 2026-02-03 01:38:13 -05:00
Barrett Ruth
89e3c0e21d
Merge pull request #226 from barrettruth/feat/dir-bug
misc bugfixes
2026-02-02 13:16:46 -05:00
Barrett Ruth
a9ce31a291
Merge branch 'main' into feat/dir-bug 2026-02-02 13:13:41 -05:00
Barrett Ruth
c8f735617a misc bugfixes 2026-02-02 13:13:08 -05:00
Barrett Ruth
a14f543371
Merge pull request #225 from barrettruth/fix/rockspec
fix username docs
2026-02-01 17:13:00 -05:00
Barrett Ruth
56ec178cdd
Merge branch 'main' into fix/rockspec 2026-02-01 17:12:38 -05:00
Barrett Ruth
5cd6f75419 fix username too 2026-02-01 17:11:51 -05:00
Barrett Ruth
99d907aa7a
Merge pull request #224 from barrettruth/fix/rockspec
fix rockspec url for new username
2026-02-01 17:02:22 -05:00
Barrett Ruth
c06d819597 fix(ci): fix rockspec url 2026-02-01 17:01:29 -05:00
Barrett Ruth
682b267019
Merge pull request #223 from barrettruth/fix/ci
fix ci
2026-01-27 17:20:43 -06:00
Barrett Ruth
8a2871ec1b
Merge branch 'main' into fix/ci 2026-01-27 17:20:25 -06:00
Barrett Ruth
de1295d361 fix ci 2026-01-27 18:19:49 -05:00
Barrett Ruth
32f449850b
Merge pull request #222 from barrettruth/fix/ci
feat: misc tests
2026-01-27 17:16:16 -06:00
Barrett Ruth
6966e8e101 feat: misc tests 2026-01-27 18:14:54 -05:00
Barrett Ruth
a5e094d44a
Merge pull request #221 from barrettruth/fix/ci
fix(ci): only run on tag push
2026-01-27 17:12:10 -06:00
Barrett Ruth
5de6fb2fee
Merge branch 'main' into fix/ci 2026-01-27 17:10:47 -06:00
Barrett Ruth
bd25f1db0b fix(ci): only run on tag push 2026-01-27 18:09:57 -05:00
Barrett Ruth
9daa4e4ec4
Merge pull request #220 from barrettruth/fix/ci
run luarocks build on successful ci
2026-01-27 17:06:12 -06:00
Barrett Ruth
0b5c0f0c40 fix(ci): only run luarocks build on successful ci 2026-01-27 18:04:56 -05:00
Barrett Ruth
af559b0fa3
Merge pull request #219 from barrettruth/fix/misc
improve config validation
2026-01-27 16:38:03 -06:00
Barrett Ruth
d496509fce feat(config): improve config parsing phrasing 2026-01-27 17:33:16 -05:00
Barrett Ruth
383b327442 fix(config): validate scraper names better 2026-01-27 17:32:21 -05:00
Barrett Ruth
3f677137de fix(config): one of validation 2026-01-27 17:27:15 -05:00
Barrett Ruth
0a1cea9b43 feat: debug 2026-01-27 17:25:03 -05:00
Barrett Ruth
6ba51a92c2
Merge pull request #218 from barrettruth/fix/scraper-refactor
misc tweaks
2026-01-27 16:22:08 -06:00
Barrett Ruth
86f2e41983
Merge branch 'main' into fix/scraper-refactor 2026-01-27 16:20:44 -06:00
Barrett Ruth
d89a40b21f feat: update git formatting 2026-01-27 17:18:52 -05:00
Barrett Ruth
3348ac3e51 feat: improve formatting 2026-01-27 16:48:04 -05:00
Barrett Ruth
ee38da5074 feat(layout): change formatting 2026-01-27 16:47:50 -05:00
Barrett Ruth
9af359eb01 feat(layout): cleanup mode labels 2026-01-27 16:47:42 -05:00
Barrett Ruth
0b21d02f24 fix(runner): save buffer before compile 2026-01-27 16:42:16 -05:00
Barrett Ruth
282d701327 fix: minor log msg tweak 2026-01-27 16:10:00 -05:00
Barrett Ruth
dcadf7447d
Merge pull request #215 from barrettruth/fix/scraper-refactor
refactor scrapers
2026-01-27 15:06:05 -06:00
Barrett Ruth
89c1a3c683 fix(ci): more fixes 2026-01-27 15:56:34 -05:00
Barrett Ruth
83514c453e fix(ci): remove unused import 2026-01-27 15:48:26 -05:00
Barrett Ruth
d5c6783124 feat(scrapers): refactor 2026-01-27 15:43:40 -05:00
Barrett Ruth
5293515aca feat(scrapers): refactor 2026-01-27 14:44:08 -05:00
Barrett Ruth
7dafb7ea43
Merge pull request #214 from barrettruth/feat/highlights
use default neovim group highlights
2026-01-27 13:33:01 -06:00
Barrett Ruth
0f82ae4fdb Merge branch 'main' into feat/highlights 2026-01-27 14:31:23 -05:00
Barrett Ruth
873ddee0d4 fix(doc): feature-parity 2026-01-27 14:30:22 -05:00
Barrett Ruth
fb7888b83c feat(highlight): use default highlights 2026-01-27 14:27:41 -05:00
Barrett Ruth
ae7b571b68
Merge pull request #212 from barrettruth/feat/async
make `:CP {run,panel}` asynchronous
2026-01-27 13:25:15 -06:00
Barrett Ruth
4c5c44742e feat: refactors 2026-01-27 14:23:23 -05:00
Barrett Ruth
d4c5f08b5f fix(render): change pending status text to symbol 2026-01-27 13:31:07 -05:00
Barrett Ruth
0f513370ac fix(render): fix table render in partial state 2026-01-27 13:30:32 -05:00
Barrett Ruth
8969dbccf8 fix(panel): table rendering 2026-01-27 13:18:11 -05:00
Barrett Ruth
ba26cee7f9 feat(run): make running entirely asynchronous 2026-01-27 12:55:35 -05:00
Barrett Ruth
b88e2ce746
Merge pull request #211 from barrettruth/fix/disappearing-io-view
fix `:CP {prev,next}` race condition
2026-01-27 11:28:16 -06:00
Barrett Ruth
c8c0da6d61 fix(ci): format 2026-01-27 12:27:09 -05:00
Barrett Ruth
d40d80c541 fix: race condition & logs 2026-01-27 12:22:53 -05:00
Barrett Ruth
4369fe8b0c
Merge pull request #210 from barrettruth/feat/about
motivation
2026-01-15 17:08:34 -06:00
Barrett Ruth
363a1e88e9 fix(ci): format 2026-01-15 18:08:03 -05:00
Barrett Ruth
702cce959d feat(docs): add motivation 2026-01-15 18:03:49 -05:00
Barrett Ruth
ebeed1887d
Merge pull request #209 from barrettruth/barrettruth-patch-1
Update README.md
2026-01-10 11:04:56 -06:00
Barrett Ruth
48bafffcde
Update README.md 2026-01-10 11:04:44 -06:00
Barrett Ruth
b85113b805
Merge pull request #208 from barrett-ruth/feat/buf-cleanup
buffer cleanup mgmt
2025-12-31 13:08:19 -06:00
Barrett Ruth
fa45d912b8 close out other bufs on source buf close 2025-12-31 13:06:25 -06:00
Barrett Ruth
d613d3d24a
Merge pull request #206 from barrett-ruth/fix/logs
fix debug msg
2025-12-18 14:43:24 -06:00
Barrett Ruth
445059a498 fix(runner): proper debug msg 2025-12-18 14:43:03 -06:00
Barrett Ruth
e0596aefff
Merge pull request #205 from barrett-ruth/fix/logging
add necessary logging
2025-12-14 16:35:51 -06:00
Barrett Ruth
3a0c0de599 another log statement 2025-12-14 16:30:10 -06:00
Barrett Ruth
10b3dcd846 fix: add debug log 2025-12-14 16:23:14 -06:00
Barrett Ruth
edb341ae51
Merge pull request #202 from barrett-ruth/fix/notice
use `scrapling.Fetcher.get`, not `scrapling.StealthyFetcher.fetch`
2025-12-08 19:48:15 -06:00
Barrett Ruth
dfd8275421 fix: use a diff scraper for now 2025-12-08 19:46:14 -06:00
Barrett Ruth
680a22f303
Merge pull request #201 from barrett-ruth/fix/notice
update docs for that scrapling DOES work
2025-12-08 19:42:31 -06:00
Barrett Ruth
eb3f93587f fix(docs): scrapling DOES work 2025-12-08 19:21:30 -06:00
Barrett Ruth
9926965677
Merge pull request #200 from barrett-ruth/fix/miscl
update uv pks
2025-12-08 19:16:37 -06:00
Barrett Ruth
c7f573a93b install deps 2025-12-08 00:44:44 -06:00
Barrett Ruth
ac51b2c799 fix(scraper): done 2025-12-08 00:20:48 -06:00
Barrett Ruth
ecd76795ce feat: add envrc 2025-12-07 16:19:42 -06:00
Barrett Ruth
3c3e6172fc fix(ci): rename parameter for type-checking 2025-12-07 16:14:00 -06:00
Barrett Ruth
f805251762 some misc fixes 2025-12-07 16:09:17 -06:00
Barrett Ruth
6647e4120e fix: remove debug script 2025-12-07 15:40:10 -06:00
Barrett Ruth
06f8627331 fix: update pkgs 2025-12-07 15:38:56 -06:00
Barrett Ruth
5b43b64401
Merge pull request #199 from barrett-ruth/feat/misc-fixups
improve error message
2025-12-04 18:21:07 -05:00
Barrett Ruth
99109f5e91 fix: cleanup picker message 2025-12-04 18:12:10 -05:00
Barrett Ruth
944d37dc75 fix(git): ignore node_modules 2025-12-04 18:10:22 -05:00
Barrett Ruth
f91fbb2ca0
Merge pull request #196 from barrett-ruth/fix/uv
fix: fix uv conflict
2025-11-28 23:46:34 -05:00
Barrett Ruth
bbe04589b8 fix: fix uv conflict 2025-11-28 23:45:17 -05:00
Barrett Ruth
6aca33e371
Merge pull request #195 from barrett-ruth/fix/ci
easier uv install in ci
2025-11-28 01:38:28 -05:00
Barrett Ruth
675917796d fix(ci): easier uv install 2025-11-28 01:37:08 -05:00
Barrett Ruth
e12b39bda1
Merge pull request #194 from barrett-ruth/fid
x
2025-11-28 01:31:01 -05:00
Barrett Ruth
c9769e04b8 x 2025-11-28 01:30:48 -05:00
Barrett Ruth
864e6ceeae
Merge pull request #193 from barrett-ruth/fid
run pre-commit prettier on all files
2025-11-28 00:29:35 -05:00
Barrett Ruth
9cc2b52111 cleanup 2025-11-28 00:28:21 -05:00
Barrett Ruth
dcf8150cb2
Merge pull request #192 from barrett-ruth/feat/io/cp-test-case
open io view after cp test validation
2025-11-06 01:48:07 -05:00
Barrett Ruth
71863fde7f fix(io): validate view later 2025-11-06 01:46:10 -05:00
Barrett Ruth
5bcee87892
Merge pull request #191 from barrett-ruth/feat/io/view-togggle
misc io view fixups
2025-11-06 01:40:41 -05:00
Barrett Ruth
00987bb0ff feat(io): cleanup view 2025-11-06 01:31:50 -05:00
Barrett Ruth
d121784de5
Merge pull request #190 from barrett-ruth/feat/io/view-togggle
io view toggle + scraper fixes
2025-11-06 00:20:05 -05:00
Barrett Ruth
07e4372a4a cleanup 2025-11-06 00:18:09 -05:00
Barrett Ruth
0e778a128e Merge main into feat/io/view-togggle
Resolved conflicts:
- scrapers/atcoder.py: kept defensive if tests else '' checks
- scrapers/codechef.py: kept defensive if tests else '' checks
- tests/test_scrapers.py: kept comprehensive validation from main
- lua/cp/ui/views.lua: removed misplaced navigation code from loop
2025-11-05 23:01:04 -05:00
Barrett Ruth
d0f1dbf132 cleanup 2025-11-05 19:23:30 -05:00
Barrett Ruth
5995ded7d5
Merge pull request #189 from barrett-ruth/feat/multi-test-case
Multi-Test Case View
2025-11-05 19:23:09 -05:00
Barrett Ruth
e7ba6b4bb4 fix(test): update scrapers 2025-11-05 18:43:01 -05:00
Barrett Ruth
7d8d00c5ad fix(ui): correct output buf 2025-11-05 13:10:17 -05:00
Barrett Ruth
13d931ed19 feat: update 2025-11-05 12:47:38 -05:00
Barrett Ruth
96c01bf796 cleanup 2025-11-04 23:47:06 -05:00
Barrett Ruth
127de3d6a5 fix 2025-11-04 23:39:43 -05:00
Barrett Ruth
6a1534124d fix(ci): formatting 2025-11-04 22:16:49 -05:00
Barrett Ruth
8237dc4c16 fix(ci): upgrade python format 2025-11-04 22:16:08 -05:00
Barrett Ruth
cea90dbda5 preliminary updates 2025-11-04 22:10:42 -05:00
Barrett Ruth
1b0d5e4d77 feat: fix typing 2025-11-04 22:08:07 -05:00
Barrett Ruth
bd557ab069 feat(doc): fix 2025-11-04 21:57:47 -05:00
Barrett Ruth
e1c8c4beaf feat(cli): :CP run with numbered test cases 2025-11-04 21:45:45 -05:00
Barrett Ruth
71efb24cda fix 2025-11-04 21:32:51 -05:00
Barrett Ruth
aab211902e feat: multi-test case view 2025-11-04 21:32:40 -05:00
Barrett Ruth
6477fdc20c
Merge pull request #186 from barrett-ruth/feat/io/multi-test-case
Multi-Test Case View
2025-11-04 08:35:11 -05:00
Barrett Ruth
9238118fbe fix(ci): formatting 2025-11-04 08:33:56 -05:00
Barrett Ruth
6a61780928 fix(ci): typing 2025-11-04 08:19:14 -05:00
Barrett Ruth
fef73887e4 feat(io): multi-test case view 2025-11-04 08:15:08 -05:00
Barrett Ruth
3654748632 fix(scrapers): fix multi-test case codeforces running 2025-11-02 22:42:05 -05:00
Barrett Ruth
73c91e2b28
Merge pull request #185 from barrett-ruth/cleanup
cleanup
2025-10-31 23:27:23 -04:00
Barrett Ruth
91f85d066d cleanup 2025-10-31 23:24:35 -04:00
Barrett Ruth
71a6aac826
Merge pull request #184 from barrett-ruth/fix/codeforces-problem-url
fix(codeforces): correct problem url
2025-10-31 21:47:36 -04:00
Barrett Ruth
7bfa839c84 fix(codeforces): correct problem url 2025-10-31 21:47:15 -04:00
Barrett Ruth
6a2f58430d
Merge pull request #183 from barrett-ruth/feat/codechef
format codechef
2025-10-25 02:03:22 -04:00
Barrett Ruth
161c4cc113 fix(ci): format fixtures 2025-10-25 02:01:48 -04:00
Barrett Ruth
9b1f97dfec
Merge pull request #182 from barrett-ruth/barrett-ruth-patch-1
Update README.md
2025-10-25 02:01:17 -04:00
Barrett Ruth
701d70a7ae
Update README.md 2025-10-25 01:59:27 -04:00
Barrett Ruth
8fd4ce9651
Merge pull request #179 from barrett-ruth/feat/codechef
add codechef platform
2025-10-25 01:43:32 -04:00
Barrett Ruth
e89c2e1cf5 feat(codechef): finalize codechef impl 2025-10-25 01:41:55 -04:00
Barrett Ruth
f78e43bdd4 fix paths 2025-10-25 00:42:03 -04:00
Barrett Ruth
2ab03e624c fix rest of routes 2025-10-25 00:37:30 -04:00
Barrett Ruth
fa3de99222 fix(test): relocate fixtures 2025-10-25 00:37:19 -04:00
Barrett Ruth
4fe623c806 fix(test): refactor fixtures 2025-10-25 00:34:56 -04:00
Barrett Ruth
8ba2a598fe fix(tests): refactor fixture directory 2025-10-25 00:34:32 -04:00
Barrett Ruth
2fda5a74ca feat: codechef 2025-10-25 00:26:33 -04:00
Barrett Ruth
401494aab0
Merge pull request #178 from barrett-ruth/feat/ui/remove-extra-line
close all buffers on edit in ui mode
2025-10-24 21:43:32 -04:00
Barrett Ruth
9b90e3a452 feat(ui): close all buffers on edit 2025-10-24 21:40:13 -04:00
Barrett Ruth
5de81e55a9
Merge pull request #177 from barrett-ruth/feat/ui/remove-extra-line
remove extra line from test cases
2025-10-24 21:36:33 -04:00
Barrett Ruth
8345d147cf fix(ui): remove extra line from test cases 2025-10-24 21:31:03 -04:00
Barrett Ruth
1d89fa0bdd
Merge pull request #176 from barrett-ruth/feat/ui/test-case-editing
test case management
2025-10-24 17:10:54 -04:00
Barrett Ruth
a45657c583
Merge pull request #174 from barrett-ruth/feat/edit
test case editor
2025-10-24 16:15:04 -04:00
Barrett Ruth
c857b66998
Merge pull request #173 from barrett-ruth/feat/lang
fix language-based problem navigation
2025-10-24 14:27:18 -04:00
Barrett Ruth
0790fa7d6f
Merge pull request #172 from barrett-ruth/barrett-ruth-patch-1
Update README.md
2025-10-24 11:15:58 -04:00
Barrett Ruth
3822348642
Update README.md 2025-10-24 11:15:46 -04:00
Barrett Ruth
0418ef4613
Merge pull request #166 from barrett-ruth/feat/lang
language options with `--lang`
2025-10-24 01:45:55 -04:00
Barrett Ruth
36e75ad71b
Merge pull request #165 from barrett-ruth/feat/config/format
improve config flexibility
2025-10-24 00:39:48 -04:00
Barrett Ruth
f9a1f79aef
Merge pull request #164 from barrett-ruth/feat/debug
`--debug` flag
2025-10-23 23:59:09 -04:00
Barrett Ruth
743c29e634
Merge pull request #163 from barrett-ruth/feat/ui/alignment
ui alignment
2025-10-23 23:21:02 -04:00
Barrett Ruth
1becd25cc0
Merge pull request #161 from barrett-ruth/feat/ui/io-view
io view
2025-10-23 22:35:11 -04:00
Barrett Ruth
52a4286b70
Merge pull request #160 from barrett-ruth/feat/window-state
add solution window to state
2025-10-23 10:10:42 -04:00
Barrett Ruth
f0edb103ce
Merge pull request #159 from barrett-ruth/fix/panel-rename
fix: rename run panel to panel
2025-10-23 10:04:57 -04:00
63 changed files with 8129 additions and 2695 deletions

.busted (deleted, 13 lines)

@@ -1,13 +0,0 @@
return {
_all = {
coverage = false,
lpath = 'lua/?.lua;lua/?/init.lua',
lua = 'nlua',
},
default = {
verbose = true,
},
tests = {
verbose = true,
},
}

.github/ISSUE_TEMPLATE/bug_report.yaml (new file, 78 lines)

@@ -0,0 +1,78 @@
name: Bug Report
description: Report a bug
title: 'bug: '
labels: [bug]
body:
- type: checkboxes
attributes:
label: Prerequisites
options:
- label:
I have searched [existing
issues](https://github.com/barrettruth/cp.nvim/issues)
required: true
- label: I have updated to the latest version
required: true
- type: textarea
attributes:
label: 'Neovim version'
description: 'Output of `nvim --version`'
render: text
validations:
required: true
- type: input
attributes:
label: 'Operating system'
placeholder: 'e.g. Arch Linux, macOS 15, Ubuntu 24.04'
validations:
required: true
- type: textarea
attributes:
label: Description
description: What happened? What did you expect?
validations:
required: true
- type: textarea
attributes:
label: Steps to reproduce
description: Minimal steps to trigger the bug
value: |
1.
2.
3.
validations:
required: true
- type: textarea
attributes:
label: 'Health check'
description: 'Output of `:checkhealth cp`'
render: text
- type: textarea
attributes:
label: Minimal reproduction
description: |
Save the script below as `repro.lua`, edit if needed, and run:
```
nvim -u repro.lua
```
Confirm the bug reproduces with this config before submitting.
render: lua
value: |
vim.env.LAZY_STDPATH = '.repro'
load(vim.fn.system('curl -s https://raw.githubusercontent.com/folke/lazy.nvim/main/bootstrap.lua'))()
require('lazy.nvim').setup({
spec = {
{
'barrett-ruth/cp.nvim',
opts = {},
},
},
})
validations:
required: true

.github/ISSUE_TEMPLATE/config.yaml (new file, 5 lines)

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Questions
url: https://github.com/barrettruth/cp.nvim/discussions
about: Ask questions and discuss ideas


@@ -0,0 +1,30 @@
name: Feature Request
description: Suggest a feature
title: 'feat: '
labels: [enhancement]
body:
- type: checkboxes
attributes:
label: Prerequisites
options:
- label:
I have searched [existing
issues](https://github.com/barrettruth/cp.nvim/issues)
required: true
- type: textarea
attributes:
label: Problem
description: What problem does this solve?
validations:
required: true
- type: textarea
attributes:
label: Proposed solution
validations:
required: true
- type: textarea
attributes:
label: Alternatives considered


@@ -1,18 +1,21 @@
name: Release
name: luarocks
on:
push:
tags:
- '*'
workflow_dispatch:
- 'v*'
jobs:
publish-luarocks:
name: Publish to LuaRocks
ci:
uses: ./.github/workflows/ci.yaml
publish:
needs: ci
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Publish to LuaRocks
uses: nvim-neorocks/luarocks-tag-release@v7
- uses: nvim-neorocks/luarocks-tag-release@v7
env:
LUAROCKS_API_KEY: ${{ secrets.LUAROCKS_API_KEY }}


@@ -1,4 +1,4 @@
name: Code Quality
name: quality
on:
pull_request:
@@ -115,10 +115,10 @@ jobs:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Install dependencies with mypy
- name: Install dependencies with uv
run: uv sync --dev
- name: Type check Python files with mypy
run: uv run mypy .
- name: Type check Python files with ty
run: uvx ty check .
markdown-format:
name: Markdown Format Check


@@ -1,4 +1,4 @@
name: Tests
name: tests
on:
pull_request:
@@ -35,21 +35,6 @@ jobs:
- 'pyproject.toml'
- 'uv.lock'
lua-test:
name: Lua Tests (${{ matrix.neovim_version }})
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.lua == 'true' }}
strategy:
matrix:
neovim_version: ['stable', 'nightly']
steps:
- uses: actions/checkout@v4
- name: Run Lua tests
uses: nvim-neorocks/nvim-busted-action@v1
with:
nvim_version: ${{ matrix.neovim_version }}
python-test:
name: Python Tests
runs-on: ubuntu-latest
@@ -59,9 +44,7 @@ jobs:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Install dependencies with pytest
- name: Install dependencies
run: uv sync --dev
- name: Fetch camoufox data
run: uv run camoufox fetch
- name: Run Python tests
run: uv run pytest tests/ -v

.gitignore (14 changed lines)

@@ -1,9 +1,19 @@
.venv/
.venv
venv
doc/tags
*.log
build
io
debug
venv/
create
.*cache*
CLAUDE.md
__pycache__
.claude/
node_modules/
.envrc
.direnv/


@@ -2,7 +2,7 @@ minimum_pre_commit_version: '3.5.0'
repos:
- repo: https://github.com/JohnnyMorganz/StyLua
rev: v2.1.0
rev: v2.3.1
hooks:
- id: stylua-github
name: stylua (Lua formatter)
@@ -10,7 +10,7 @@ repos:
pass_filenames: true
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.9
rev: v0.14.3
hooks:
- id: ruff-format
name: ruff (format)
@@ -20,18 +20,17 @@ repos:
args: ['--fix', '--select=I']
files: \.py$
- repo: local
hooks:
- id: mypy
name: mypy (type check)
entry: uv run mypy
language: system
args: ['.']
pass_filenames: false
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v3.1.0
rev: v4.0.0-alpha.8
hooks:
- id: prettier
name: prettier (format markdown)
files: \.md$
name: prettier
files: \.(md|toml|ya?ml|sh)$
- repo: local
hooks:
- id: ty-type-check
name: ty (Python type checker)
language: system
entry: uv run ty check
types: [python]


@@ -5,12 +5,11 @@
Scrape problems, run tests, and debug solutions across multiple platforms with
zero configuration.
https://github.com/user-attachments/assets/50b19481-8e6d-47b4-bebc-15e16c61a9c9
https://github.com/user-attachments/assets/e81d8dfb-578f-4a79-9989-210164fc0148
## Features
- **Multi-platform support**: AtCoder, Codeforces, CSES with consistent
interface
- **Multi-platform support**: AtCoder, CodeChef, Codeforces, and CSES
- **Automatic problem setup**: Scrape test cases and metadata in seconds
- **Dual view modes**: Lightweight I/O view for quick feedback, full panel for
detailed analysis
@@ -20,11 +19,21 @@ https://github.com/user-attachments/assets/50b19481-8e6d-47b4-bebc-15e16c61a9c9
- **Language agnostic**: Works with any language
- **Diff viewer**: Compare expected vs actual output with 3 diff modes
## Optional Dependencies
## Installation
Install using your package manager of choice or via
[luarocks](https://luarocks.org/modules/barrettruth/cp.nvim):
```
luarocks install cp.nvim
```
## Dependencies
- [uv](https://docs.astral.sh/uv/) for problem scraping
- GNU [time](https://www.gnu.org/software/time/) and
[timeout](https://www.gnu.org/software/coreutils/manual/html_node/timeout-invocation.html)
- [uv](https://docs.astral.sh/uv/) or [nix](https://nixos.org/) for problem
scraping
## Quick Start
@@ -69,9 +78,22 @@ cp.nvim follows a simple principle: **solve locally, submit remotely**.
```
See
[my config](https://github.com/barrett-ruth/dots/blob/main/nvim/lua/plugins/cp.lua)
[my config](https://github.com/barrettruth/dots/blob/main/.config/nvim/lua/plugins/cp.lua)
for the setup in the video shown above.
## Motivation
I could not find a neovim-centric, efficient, dependency-free, flexible, and
easily customizable competitive programming workflow that "just works", so I
made it myself. I conferred with top competitive programmers at Carnegie
Mellon University and the University of Virginia and covered their (and my)
pain points:
- Scraping: contests are automatically loaded asynchronously
- Test Case Management: test case editor (`:CP edit`)
- UI: both `run` and `panel` layouts cover common formats
- Extensibility: snippet plugins, compilation, etc. are left to the programmer
## Similar Projects
- [competitest.nvim](https://github.com/xeluxee/competitest.nvim)


@@ -2,7 +2,7 @@ rockspec_format = '3.0'
package = 'cp.nvim'
version = 'scm-1'
source = { url = 'git://github.com/barrett-ruth/cp.nvim' }
source = { url = 'git://github.com/barrettruth/cp.nvim' }
build = { type = 'builtin' }
test_dependencies = {


@@ -18,6 +18,243 @@ REQUIREMENTS *cp-requirements*
- Unix-like operating system
- uv package manager (https://docs.astral.sh/uv/)
==============================================================================
SETUP *cp-setup*
Load cp.nvim with your package manager. For example, with lazy.nvim: >lua
{ 'barrettruth/cp.nvim' }
<
The plugin works automatically with no configuration required. For
customization, see |cp-config|.
==============================================================================
CONFIGURATION *cp-config*
Configuration is done via `vim.g.cp`. Set this before using the plugin:
>lua
vim.g.cp = {
languages = {
cpp = {
extension = 'cc',
commands = {
build = { 'g++', '-std=c++17', '{source}', '-o', '{binary}',
'-fdiagnostics-color=always' },
run = { '{binary}' },
debug = { 'g++', '-std=c++17', '-fsanitize=address,undefined',
'{source}', '-o', '{binary}' },
},
},
python = {
extension = 'py',
commands = {
run = { 'python', '{source}' },
debug = { 'python', '{source}' },
},
},
},
platforms = {
cses = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
overrides = {
cpp = { extension = 'cpp', commands = { build = { ... } } }
},
},
atcoder = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
},
codeforces = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
},
},
open_url = true,
debug = false,
ui = {
ansi = true,
run = {
width = 0.3,
next_test_key = '<c-n>', -- or nil to disable
prev_test_key = '<c-p>', -- or nil to disable
},
panel = {
diff_modes = { 'side-by-side', 'git', 'vim' },
max_output_lines = 50,
},
diff = {
git = {
args = { 'diff', '--no-index', '--word-diff=plain',
'--word-diff-regex=.', '--no-prefix' },
},
},
picker = 'telescope',
},
}
<
By default, C++ (g++ with ISO C++17) and Python are preconfigured under
'languages'. Platforms select which languages are enabled and which one is
the default; per-platform overrides can tweak 'extension' or 'commands'.
For example, to run CodeForces contests with Python by default:
>lua
vim.g.cp = {
platforms = {
codeforces = {
default_language = 'python',
},
},
}
<
Any language is supported, provided the proper configuration. For example, to
run CSES problems with Rust using the same schema:
>lua
vim.g.cp = {
languages = {
rust = {
extension = 'rs',
commands = {
build = { 'rustc', '{source}', '-o', '{binary}' },
run = { '{binary}' },
},
},
},
platforms = {
cses = {
enabled_languages = { 'cpp', 'python', 'rust' },
default_language = 'rust',
},
},
}
<
*cp.Config*
Fields: ~
{languages} (table<string,|CpLanguage|>) Global language registry.
Each language provides an {extension} and {commands}.
{platforms} (table<string,|CpPlatform|>) Per-platform enablement,
default language, and optional overrides.
{hooks} (|cp.Hooks|) Hook functions called at various stages.
{debug} (boolean, default: false) Show info messages.
{scrapers} (string[]) Supported platform ids.
{filename} (function, optional)
function(contest, contest_id, problem_id, config, language): string
Should return full filename with extension.
(default: concatenates contest_id and problem_id, lowercased)
{ui} (|CpUI|) UI settings: panel, diff backend, picker.
{open_url} (boolean) Open the contest & problem url in the browser
when the contest is first opened.
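The filename-generating field above can be overridden. A hypothetical sketch
(the field name `filename` and per-contest directory layout are assumptions,
not the plugin's canonical defaults): >lua
    vim.g.cp = {
      -- Assumed signature from the field description above; treat this
      -- as a sketch rather than a definitive implementation.
      filename = function(contest, contest_id, problem_id, config, language)
        local ext = config.languages[language].extension
        return string.format('%s%s.%s', contest_id, problem_id:lower(), ext)
      end,
    }
<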
*CpPlatform*
Fields: ~
{enabled_languages} (string[]) Language ids enabled on this platform.
{default_language} (string) One of {enabled_languages}.
{overrides} (table<string,|CpPlatformOverrides|>, optional)
Per-language overrides of {extension} and/or {commands}.
*CpLanguage*
Fields: ~
{extension} (string) File extension without leading dot.
{commands} (|CpLangCommands|) Command templates.
*CpLangCommands*
Fields: ~
{build} (string[], optional) For compiled languages.
Must include {source} and {binary}.
{run} (string[], optional) Runtime command.
Compiled: must include {binary}.
Interpreted: must include {source}.
{debug} (string[], optional) Debug variant; same token rules
as {build} (compiled) or {run} (interpreted).
*CpUI*
Fields: ~
{ansi} (boolean, default: true) Enable ANSI color parsing
and highlighting in both I/O view and panel.
{run} (|RunConfig|) I/O view configuration.
{panel} (|PanelConfig|) Test panel behavior configuration.
{diff} (|DiffConfig|) Diff backend configuration.
{picker} (string|nil) 'telescope', 'fzf-lua', or nil.
*RunConfig*
Fields: ~
{width} (number, default: 0.3) Width of I/O view splits as
fraction of screen (0.0 to 1.0).
{next_test_key} (string|nil, default: '<c-n>') Keymap to navigate
to next test in I/O view. Set to nil to disable.
{prev_test_key} (string|nil, default: '<c-p>') Keymap to navigate
to previous test in I/O view. Set to nil to disable.
{format_verdict} (|VerdictFormatter|, default: nil) Custom verdict line
formatter. See |cp-verdict-format|.
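A custom verdict formatter might look like the following sketch. The exact
|VerdictFormatter| signature is defined under |cp-verdict-format|; here the
argument shape (a table with `index` and `status` fields) is an assumption: >lua
    vim.g.cp = {
      ui = {
        run = {
          -- Assumed signature: receives a verdict table, returns the
          -- line to display in the I/O view.
          format_verdict = function(verdict)
            return string.format('#%d %s', verdict.index, verdict.status)
          end,
        },
      },
    }
<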
*EditConfig*
Fields: ~
{next_test_key} (string|nil, default: ']t') Jump to next test.
{prev_test_key} (string|nil, default: '[t') Jump to previous test.
{delete_test_key} (string|nil, default: 'gd') Delete current test.
{add_test_key} (string|nil, default: 'ga') Add new test.
{save_and_exit_key} (string|nil, default: 'q') Save and exit editor.
All keys are nil-able. Set to nil to disable.
*cp.PanelConfig*
Fields: ~
{diff_modes} (string[], default: {'side-by-side', 'git', 'vim'})
List of diff modes to cycle through with 't' key.
First element is the default mode.
Valid modes: 'side-by-side', 'git', 'vim'.
{max_output_lines} (number, default: 50) Maximum lines of test output.
*cp.DiffConfig*
Fields: ~
{git} (|cp.DiffGitConfig|) Git diff backend configuration.
*cp.DiffGitConfig*
Fields: ~
{args} (string[]) Command-line arguments for git diff.
Default: { 'diff', '--no-index', '--word-diff=plain',
'--word-diff-regex=.', '--no-prefix' }
• --no-index: Compare files outside git repository
• --word-diff=plain: Character-level diff markers
• --word-diff-regex=.: Split on every character
• --no-prefix: Remove a/ b/ prefixes from output
*cp.Hooks*
Fields: ~
{before_run} (function, optional) Called before test panel opens.
function(state: cp.State)
{before_debug} (function, optional) Called before debug build/run.
function(state: cp.State)
{setup_code} (function, optional) Called after source file is opened.
function(state: cp.State)
{setup_io_input} (function, optional) Called when I/O input buffer created.
function(bufnr: integer, state: cp.State)
Default: helpers.clearcol (removes line numbers/columns)
{setup_io_output} (function, optional) Called when I/O output buffer created.
function(bufnr: integer, state: cp.State)
Default: helpers.clearcol (removes line numbers/columns)
Hook functions receive the cp.nvim state object (|cp.State|). See
|lua/cp/state.lua| for available methods and fields.
The I/O buffer hooks are called once when the buffers are first created
during problem setup. Use these to customize buffer appearance (e.g.,
remove line numbers, set custom options). Access helpers via:
>lua
local helpers = require('cp').helpers
<
Example usage:
>lua
hooks = {
setup_code = function(state)
print("Setting up " .. state.get_base_name())
print("Source file: " .. state.get_source_file())
end,
setup_io_input = function(bufnr, state)
vim.api.nvim_set_option_value('number', false, { buf = bufnr })
end
}
<
==============================================================================
COMMANDS *cp-commands*
@ -34,15 +271,30 @@ COMMANDS *cp-commands*
:CP codeforces 1933 --lang python
<
View Commands ~
:CP run [--debug] [n]
:CP run [all|n|n,m,...] [--debug]
Run tests in I/O view (see |cp-io-view|).
Lightweight split showing test verdicts.
Without [n]: runs all tests, shows verdict summary
With [n]: runs test n, shows detailed output
Execution modes:
• :CP run Combined: single execution with all tests
(auto-switches to individual when multiple samples)
• :CP run all Individual: N separate executions
• :CP run n Individual: run test n only
• :CP run n,m,... Individual: run the listed tests (e.g. tests n and m)
--debug: Use debug build (builds to build/<name>.dbg)
Combined mode runs all test inputs in one execution (matching
platform behavior for multi-test problems). When a problem has
multiple independent sample test cases, :CP run auto-switches to
individual mode to run each sample separately.
Examples: >
:CP run " All tests
:CP run --debug 2 " Test 2, debug build
:CP run " Combined: all tests, one execution
:CP run all " Individual: all tests, N executions
:CP run 2 " Individual: test 2 only
:CP run 1,3,5 " Individual: tests 1, 3, and 5
:CP run all --debug " Individual with debug build
<
:CP panel [--debug] [n]
Open full-screen test panel (see |cp-panel|).
@ -188,235 +440,40 @@ Debug Builds ~
<
==============================================================================
CONFIGURATION *cp-config*
MAPPINGS *cp-mappings*
Here's an example configuration with lazy.nvim:
>lua
{
'barrett-ruth/cp.nvim',
cmd = 'CP',
build = 'uv sync',
opts = {
languages = {
cpp = {
extension = 'cc',
commands = {
build = { 'g++', '-std=c++17', '{source}', '-o', '{binary}',
'-fdiagnostics-color=always' },
run = { '{binary}' },
debug = { 'g++', '-std=c++17', '-fsanitize=address,undefined',
'{source}', '-o', '{binary}' },
},
},
python = {
extension = 'py',
commands = {
run = { 'python', '{source}' },
debug = { 'python', '{source}' },
},
},
},
platforms = {
cses = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
overrides = {
cpp = { extension = 'cpp', commands = { build = { ... } } }
},
},
atcoder = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
},
codeforces = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
},
},
open_url = true,
debug = false,
ui = {
ansi = true,
run = {
width = 0.3,
next_test_key = '<c-n>', -- or nil to disable
prev_test_key = '<c-p>', -- or nil to disable
},
panel = {
diff_mode = 'vim',
max_output_lines = 50,
},
diff = {
git = {
args = { 'diff', '--no-index', '--word-diff=plain',
'--word-diff-regex=.', '--no-prefix' },
},
},
picker = 'telescope',
},
}
}
<
cp.nvim provides <Plug> mappings for all primary actions. These dispatch
through the same code path as |:CP|.
By default, C++ (g++ with ISO C++17) and Python are preconfigured under
'languages'. Platforms select which languages are enabled and which one is
the default; per-platform overrides can tweak 'extension' or 'commands'.
*<Plug>(cp-run)*
<Plug>(cp-run) Run tests in I/O view. Equivalent to :CP run.
For example, to run CodeForces contests with Python by default:
>lua
{
platforms = {
codeforces = {
default_language = 'python',
},
},
}
<
Any language is supported, provided the proper configuration. For example, to
run CSES problems with Rust using the same schema:
>lua
{
languages = {
rust = {
extension = 'rs',
commands = {
build = { 'rustc', '{source}', '-o', '{binary}' },
run = { '{binary}' },
},
},
},
platforms = {
cses = {
enabled_languages = { 'cpp', 'python', 'rust' },
default_language = 'rust',
},
},
}
<
*cp.Config*
Fields: ~
{languages} (table<string,|CpLanguage|>) Global language registry.
Each language provides an {extension} and {commands}.
{platforms} (table<string,|CpPlatform|>) Per-platform enablement,
default language, and optional overrides.
{hooks} (|cp.Hooks|) Hook functions called at various stages.
{debug} (boolean, default: false) Show info messages.
{scrapers} (string[]) Supported platform ids.
{filename} (function, optional)
function(contest, contest_id, problem_id, config, language): string
Should return full filename with extension.
(default: concatenates contest_id and problem_id, lowercased)
{ui} (|CpUI|) UI settings: panel, diff backend, picker.
{open_url} (boolean) Open the contest & problem url in the browser
when the contest is first opened.
*<Plug>(cp-panel)*
<Plug>(cp-panel) Open full-screen test panel. Equivalent to :CP panel.
*CpPlatform*
Fields: ~
{enabled_languages} (string[]) Language ids enabled on this platform.
{default_language} (string) One of {enabled_languages}.
{overrides} (table<string,|CpPlatformOverrides|>, optional)
Per-language overrides of {extension} and/or {commands}.
*<Plug>(cp-edit)*
<Plug>(cp-edit) Open the test case editor. Equivalent to :CP edit.
*CpLanguage*
Fields: ~
{extension} (string) File extension without leading dot.
{commands} (|CpLangCommands|) Command templates.
*<Plug>(cp-next)*
<Plug>(cp-next) Navigate to the next problem. Equivalent to :CP next.
*CpLangCommands*
Fields: ~
{build} (string[], optional) For compiled languages.
Must include {source} and {binary}.
{run} (string[], optional) Runtime command.
Compiled: must include {binary}.
Interpreted: must include {source}.
{debug} (string[], optional) Debug variant; same token rules
as {build} (compiled) or {run} (interpreted).
*<Plug>(cp-prev)*
<Plug>(cp-prev) Navigate to the previous problem. Equivalent to :CP prev.
*CpUI*
Fields: ~
{ansi} (boolean, default: true) Enable ANSI color parsing
and highlighting in both I/O view and panel.
{run} (|RunConfig|) I/O view configuration.
{panel} (|PanelConfig|) Test panel behavior configuration.
{diff} (|DiffConfig|) Diff backend configuration.
{picker} (string|nil) 'telescope', 'fzf-lua', or nil.
*<Plug>(cp-pick)*
<Plug>(cp-pick) Launch the contest picker. Equivalent to :CP pick.
*RunConfig*
Fields: ~
{width} (number, default: 0.3) Width of I/O view splits as
fraction of screen (0.0 to 1.0).
{next_test_key} (string|nil, default: '<c-n>') Keymap to navigate
to next test in I/O view. Set to nil to disable.
{prev_test_key} (string|nil, default: '<c-p>') Keymap to navigate
to previous test in I/O view. Set to nil to disable.
{format_verdict} (|VerdictFormatter|, default: nil) Custom verdict line
formatter. See |cp-verdict-format|.
*<Plug>(cp-interact)*
<Plug>(cp-interact) Open interactive mode. Equivalent to :CP interact.
*EditConfig*
Fields: ~
{next_test_key} (string|nil, default: ']t') Jump to next test.
{prev_test_key} (string|nil, default: '[t') Jump to previous test.
{delete_test_key} (string|nil, default: 'gd') Delete current test.
{add_test_key} (string|nil, default: 'ga') Add new test.
{save_and_exit_key} (string|nil, default: 'q') Save and exit editor.
All keys are nil-able. Set to nil to disable.
*cp.PanelConfig*
Fields: ~
{diff_mode} (string, default: "none") Diff backend: "none",
"vim", or "git".
{max_output_lines} (number, default: 50) Maximum lines of test output.
*cp.DiffConfig*
Fields: ~
{git} (|cp.DiffGitConfig|) Git diff backend configuration.
*cp.DiffGitConfig*
Fields: ~
{args} (string[]) Command-line arguments for git diff.
Default: { 'diff', '--no-index', '--word-diff=plain',
'--word-diff-regex=.', '--no-prefix' }
• --no-index: Compare files outside git repository
• --word-diff=plain: Character-level diff markers
• --word-diff-regex=.: Split on every character
• --no-prefix: Remove a/ b/ prefixes from output
*cp.Hooks*
Fields: ~
{before_run} (function, optional) Called before test panel opens.
function(state: cp.State)
{before_debug} (function, optional) Called before debug build/run.
function(state: cp.State)
{setup_code} (function, optional) Called after source file is opened.
function(state: cp.State)
{setup_io_input} (function, optional) Called when I/O input buffer created.
function(bufnr: integer, state: cp.State)
Default: helpers.clearcol (removes line numbers/columns)
{setup_io_output} (function, optional) Called when I/O output buffer created.
function(bufnr: integer, state: cp.State)
Default: helpers.clearcol (removes line numbers/columns)
Hook functions receive the cp.nvim state object (|cp.State|). See
|lua/cp/state.lua| for available methods and fields.
The I/O buffer hooks are called once when the buffers are first created
during problem setup. Use these to customize buffer appearance (e.g.,
remove line numbers, set custom options). Access helpers via:
>lua
local helpers = require('cp').helpers
<
Example usage:
>lua
hooks = {
setup_code = function(state)
print("Setting up " .. state.get_base_name())
print("Source file: " .. state.get_source_file())
end,
setup_io_input = function(bufnr, state)
-- Custom setup for input buffer
vim.api.nvim_set_option_value('number', false, { buf = bufnr })
end
}
Example configuration: >lua
vim.keymap.set('n', '<leader>cr', '<Plug>(cp-run)')
vim.keymap.set('n', '<leader>cp', '<Plug>(cp-panel)')
vim.keymap.set('n', '<leader>ce', '<Plug>(cp-edit)')
vim.keymap.set('n', '<leader>cn', '<Plug>(cp-next)')
vim.keymap.set('n', '<leader>cN', '<Plug>(cp-prev)')
vim.keymap.set('n', '<leader>cc', '<Plug>(cp-pick)')
vim.keymap.set('n', '<leader>ci', '<Plug>(cp-interact)')
<
==============================================================================
@ -536,10 +593,27 @@ Example: Setting up and solving AtCoder contest ABC324
I/O VIEW *cp-io-view*
The I/O view provides lightweight test feedback in persistent side splits.
All test outputs are concatenated with verdict summaries at the bottom.
Test outputs are concatenated with verdict summaries at the bottom.
The |cp-panel| offers more fine-grained analysis with diff modes.
Access the I/O view with :CP run [all|n|n,m,...].
Execution Modes ~
The I/O view supports two execution modes:
Combined Mode (:CP run with single sample)
• Single execution with all test inputs concatenated
• Matches platform behavior (e.g. Codeforces multi-test format)
• Shows one verdict for the entire execution
• Input split: All test inputs concatenated
• Output split: Single program output + verdict
• Used when problem has one sample containing multiple test cases
Individual Mode (:CP run all / :CP run n / :CP run n,m,...)
• Separate execution for each test case
• Per-test verdicts for debugging
• Input split: Selected test inputs concatenated
• Output split: All test outputs concatenated + per-test verdicts
• Auto-selected when problem has multiple independent samples
Layout ~
@ -552,7 +626,7 @@ The I/O view appears as 30% width splits on the right side: >
│ │ 7 714 │
│ Solution Code │ │
│ │ Test 1: WA | 212.07/2000 ms | 1/512 MB |...│
│ │ Test 2: WA | 81.94/2000 ms | 1/512 MB |...│
│ │ Test 2: WA | 81.94/2000 ms | 1/512 MB |...│
│ ├─────────────────────────────────────────────┤
│ │ Input (Bottom Split) │
│ │ 1 2 3 │
@ -561,7 +635,7 @@ The I/O view appears as 30% width splits on the right side: >
└──────────────────────────┴─────────────────────────────────────────────┘
<
The output split shows:
1. Concatenated test outputs (separated by blank lines)
1. Program output (raw, preserving all formatting)
2. Space-aligned verdict summary with:
- Test number and status (AC/WA/TLE/MLE/RTE with color highlighting)
- Runtime: actual/limit in milliseconds
@ -570,8 +644,10 @@ The output split shows:
Usage ~
:CP run Run all tests
:CP run 3 Run test 3 only
:CP run Combined mode: all tests in one execution
:CP run all Individual mode: all tests separately
:CP run 3 Individual mode: test 3 only
:CP run 1,3,5 Individual mode: specific tests (1, 3, and 5)
Navigation ~
@ -750,12 +826,15 @@ HIGHLIGHT GROUPS *cp-highlights*
Test Status Groups ~
CpTestAC Green foreground for AC status
CpTestWA Red foreground for WA status
CpTestTLE Orange foreground for TLE status
CpTestMLE Orange foreground for MLE status
CpTestRTE Purple foreground for RTE status
CpTestNA Gray foreground for remaining state
All test status groups link to builtin highlight groups, automatically adapting
to your colorscheme:
CpTestAC Links to DiagnosticOk (AC status)
CpTestWA Links to DiagnosticError (WA status)
CpTestTLE Links to DiagnosticWarn (TLE status)
CpTestMLE Links to DiagnosticWarn (MLE status)
CpTestRTE Links to DiagnosticHint (RTE status)
CpTestNA Links to Comment (pending/unknown status)
ANSI Color Groups ~
@ -814,17 +893,20 @@ PANEL KEYMAPS *cp-panel-keys*
<c-n> Navigate to next test case
<c-p> Navigate to previous test case
t Cycle through diff modes: none → git → vim
t Cycle through configured diff modes (see |cp.PanelConfig|)
q Exit panel and restore layout
<c-q> Exit interactive terminal and restore layout
Diff Modes ~
Three diff backends are available:
Three diff modes are available:
none Nothing
vim Built-in vim diff (default, always available)
git Character-level git word-diff (requires git, more precise)
side-by-side Expected and actual output shown side-by-side (default)
vim Built-in vim diff (always available)
git Character-level git word-diff (requires git, more precise)
Configure which modes to cycle through via |cp.PanelConfig|.diff_modes.
The first element is used as the default mode.
The git backend shows character-level changes with [-removed-] and {+added+}
markers.
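For example, to cycle only between the git and vim backends, the panel can be
configured as follows (using the schema shown in |cp-config|): >lua
    vim.g.cp = {
      ui = {
        panel = {
          -- First entry becomes the default diff mode.
          diff_modes = { 'git', 'vim' },
        },
      },
    }
<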

flake.lock generated Normal file
View file

@ -0,0 +1,43 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1771008912,
"narHash": "sha256-gf2AmWVTs8lEq7z/3ZAsgnZDhWIckkb+ZnAo5RzSxJg=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a82ccc39b39b621151d6732718e3e250109076fa",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs",
"systems": "systems"
}
},
"systems": {
"locked": {
"lastModified": 1689347949,
"narHash": "sha256-12tWmuL2zgBgZkdoB6qXZsgJEH9LR3oUgpaQq2RbI80=",
"owner": "nix-systems",
"repo": "default-linux",
"rev": "31732fcf5e8fea42e59c2488ad31a0e651500f68",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default-linux",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

flake.nix Normal file
View file

@ -0,0 +1,72 @@
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
systems.url = "github:nix-systems/default-linux";
};
outputs =
{
self,
nixpkgs,
systems,
}:
let
eachSystem = nixpkgs.lib.genAttrs (import systems);
pkgsFor = system: nixpkgs.legacyPackages.${system};
mkPythonEnv =
pkgs:
pkgs.python312.withPackages (ps: [
ps.backoff
ps.beautifulsoup4
ps.curl-cffi
ps.httpx
ps.ndjson
ps.pydantic
ps.requests
]);
mkPlugin =
pkgs:
let
pythonEnv = mkPythonEnv pkgs;
in
pkgs.vimUtils.buildVimPlugin {
pname = "cp-nvim";
version = "0-unstable-${self.shortRev or self.dirtyShortRev or "dev"}";
src = self;
postPatch = ''
substituteInPlace lua/cp/utils.lua \
--replace-fail "local _nix_python = nil" \
"local _nix_python = '${pythonEnv.interpreter}'"
'';
nvimSkipModule = [
"cp.pickers.telescope"
"cp.version"
];
passthru = { inherit pythonEnv; };
meta.description = "Competitive programming plugin for Neovim";
};
in
{
overlays.default = final: prev: {
vimPlugins = prev.vimPlugins // {
cp-nvim = mkPlugin final;
};
};
packages = eachSystem (system: {
default = mkPlugin (pkgsFor system);
pythonEnv = mkPythonEnv (pkgsFor system);
});
devShells = eachSystem (system: {
default = (pkgsFor system).mkShell {
packages = with (pkgsFor system); [
uv
python312
];
};
});
};
}

View file

@ -16,12 +16,18 @@
---@field name string
---@field id string
---@class CombinedTest
---@field input string
---@field expected string
---@class Problem
---@field id string
---@field name? string
---@field interactive? boolean
---@field multi_test? boolean
---@field memory_mb? number
---@field timeout_ms? number
---@field combined_test? CombinedTest
---@field test_cases TestCase[]
---@class TestCase
@ -180,38 +186,64 @@ function M.get_test_cases(platform, contest_id, problem_id)
return cache_data[platform][contest_id].problems[index].test_cases or {}
end
---@param platform string
---@param contest_id string
---@param problem_id? string
---@return CombinedTest?
function M.get_combined_test(platform, contest_id, problem_id)
if
not cache_data[platform]
or not cache_data[platform][contest_id]
or not cache_data[platform][contest_id].problems
or not cache_data[platform][contest_id].index_map
then
return nil
end
local index = cache_data[platform][contest_id].index_map[problem_id]
return cache_data[platform][contest_id].problems[index].combined_test
end
---@param platform string
---@param contest_id string
---@param problem_id string
---@param combined_test? CombinedTest
---@param test_cases TestCase[]
---@param timeout_ms number
---@param memory_mb number
---@param interactive boolean
---@param multi_test boolean
function M.set_test_cases(
platform,
contest_id,
problem_id,
combined_test,
test_cases,
timeout_ms,
memory_mb,
interactive
interactive,
multi_test
)
vim.validate({
platform = { platform, 'string' },
contest_id = { contest_id, 'string' },
problem_id = { problem_id, { 'string', 'nil' }, true },
combined_test = { combined_test, { 'table', 'nil' }, true },
test_cases = { test_cases, 'table' },
timeout_ms = { timeout_ms, { 'number', 'nil' }, true },
memory_mb = { memory_mb, { 'number', 'nil' }, true },
interactive = { interactive, { 'boolean', 'nil' }, true },
multi_test = { multi_test, { 'boolean', 'nil' }, true },
})
local index = cache_data[platform][contest_id].index_map[problem_id]
cache_data[platform][contest_id].problems[index].combined_test = combined_test
cache_data[platform][contest_id].problems[index].test_cases = test_cases
cache_data[platform][contest_id].problems[index].timeout_ms = timeout_ms
cache_data[platform][contest_id].problems[index].memory_mb = memory_mb
cache_data[platform][contest_id].problems[index].interactive = interactive
cache_data[platform][contest_id].problems[index].multi_test = multi_test
M.save()
end
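As a usage sketch of the accessor added above (an internal API; the module
path `cp.cache` is assumed here and may differ):

```lua
-- Hypothetical caller inside the plugin; module path assumed.
local cache = require('cp.cache')
local combined = cache.get_combined_test('codeforces', '1933', 'a')
if combined then
  -- combined_test carries the concatenated input and expected output
  print(combined.input, combined.expected)
end
```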

View file

@ -17,8 +17,11 @@ local actions = constants.ACTIONS
---@field problem_id? string
---@field interactor_cmd? string
---@field test_index? integer
---@field test_indices? integer[]
---@field mode? string
---@field debug? boolean
---@field language? string
---@field subcommand? string
--- Turn raw args into normalized structure to later dispatch
---@param args string[] The raw command-line mode args
@ -75,51 +78,120 @@ local function parse_command(args)
return { type = 'action', action = 'edit', test_index = test_index }
elseif first == 'run' or first == 'panel' then
local debug = false
local test_index = nil
local test_indices = nil
local mode = 'combined'
if #args == 2 then
if args[2] == '--debug' then
debug = true
elseif args[2] == 'all' then
mode = 'individual'
else
if args[2]:find(',') then
local indices = {}
for num in args[2]:gmatch('[^,]+') do
local idx = tonumber(num)
if not idx or idx < 1 or idx ~= math.floor(idx) then
return {
type = 'error',
message = ("Invalid test index '%s' in list"):format(num),
}
end
table.insert(indices, idx)
end
if #indices == 0 then
return { type = 'error', message = 'No valid test indices provided' }
end
test_indices = indices
mode = 'individual'
else
local idx = tonumber(args[2])
if not idx then
return {
type = 'error',
message = ("Invalid argument '%s': expected test number(s), 'all', or --debug"):format(
args[2]
),
}
end
if idx < 1 or idx ~= math.floor(idx) then
return { type = 'error', message = ("'%s' is not a valid test index"):format(idx) }
end
test_indices = { idx }
mode = 'individual'
end
end
elseif #args == 3 then
if args[2] == 'all' then
mode = 'individual'
if args[3] ~= '--debug' then
return {
type = 'error',
message = ("Invalid argument '%s': expected --debug"):format(args[3]),
}
end
debug = true
elseif args[2]:find(',') then
local indices = {}
for num in args[2]:gmatch('[^,]+') do
local idx = tonumber(num)
if not idx or idx < 1 or idx ~= math.floor(idx) then
return {
type = 'error',
message = ("Invalid test index '%s' in list"):format(num),
}
end
table.insert(indices, idx)
end
if #indices == 0 then
return { type = 'error', message = 'No valid test indices provided' }
end
if args[3] ~= '--debug' then
return {
type = 'error',
message = ("Invalid argument '%s': expected --debug"):format(args[3]),
}
end
test_indices = indices
mode = 'individual'
debug = true
else
local idx = tonumber(args[2])
if not idx then
return {
type = 'error',
message = ("Invalid argument '%s': expected test number or --debug"):format(args[2]),
message = ("Invalid argument '%s': expected test number"):format(args[2]),
}
end
if idx < 1 or idx ~= math.floor(idx) then
return { type = 'error', message = ("'%s' is not a valid test index"):format(idx) }
end
test_index = idx
if args[3] ~= '--debug' then
return {
type = 'error',
message = ("Invalid argument '%s': expected --debug"):format(args[3]),
}
end
test_indices = { idx }
mode = 'individual'
debug = true
end
elseif #args == 3 then
local idx = tonumber(args[2])
if not idx then
return {
type = 'error',
message = ("Invalid argument '%s': expected test number"):format(args[2]),
}
end
if idx < 1 or idx ~= math.floor(idx) then
return { type = 'error', message = ("'%s' is not a valid test index"):format(idx) }
end
if args[3] ~= '--debug' then
return {
type = 'error',
message = ("Invalid argument '%s': expected --debug"):format(args[3]),
}
end
test_index = idx
debug = true
elseif #args > 3 then
return {
type = 'error',
message = 'Too many arguments. Usage: :CP ' .. first .. ' [test_num] [--debug]',
message = 'Too many arguments. Usage: :CP '
.. first
.. ' [all|test_num[,test_num...]] [--debug]',
}
end
return { type = 'action', action = first, test_index = test_index, debug = debug }
return {
type = 'action',
action = first,
test_indices = test_indices,
debug = debug,
mode = mode,
}
else
local language = nil
if #args >= 3 and args[2] == '--lang' then
@ -197,9 +269,12 @@ function M.handle_command(opts)
if cmd.action == 'interact' then
ui.toggle_interactive(cmd.interactor_cmd)
elseif cmd.action == 'run' then
ui.run_io_view(cmd.test_index, cmd.debug)
ui.run_io_view(cmd.test_indices, cmd.debug, cmd.mode)
elseif cmd.action == 'panel' then
ui.toggle_panel({ debug = cmd.debug, test_index = cmd.test_index })
ui.toggle_panel({
debug = cmd.debug,
test_index = cmd.test_indices and cmd.test_indices[1] or nil,
})
elseif cmd.action == 'next' then
setup.navigate_problem(1, cmd.language)
elseif cmd.action == 'prev' then

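Taken together, the parsing above maps invocations to normalized command
tables; for instance (shapes inferred from the code above):

```lua
-- ':CP run'             -> { type = 'action', action = 'run',
--                            mode = 'combined', test_indices = nil,
--                            debug = false }
-- ':CP run 1,3 --debug' -> { type = 'action', action = 'run',
--                            mode = 'individual', test_indices = { 1, 3 },
--                            debug = true }
```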
View file

@ -40,7 +40,7 @@ function M.handle_pick_action(language)
local ok, _ = pcall(require, 'fzf-lua')
if not ok then
logger.log(
'fzf-lua is not available. Install fzf-lua xor change your picker config',
'fzf-lua is not available. Install fzf-lua or change your picker config',
vim.log.levels.ERROR
)
return

View file

@ -18,7 +18,7 @@
---@field overrides? table<string, CpPlatformOverrides>
---@class PanelConfig
---@field diff_mode "none"|"vim"|"git"
---@field diff_modes string[]
---@field max_output_lines integer
---@class DiffGitConfig
@ -139,6 +139,10 @@ M.defaults = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
},
codechef = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
},
cses = {
enabled_languages = { 'cpp', 'python' },
default_language = 'cpp',
@ -169,7 +173,7 @@ M.defaults = {
add_test_key = 'ga',
save_and_exit_key = 'q',
},
panel = { diff_mode = 'none', max_output_lines = 50 },
panel = { diff_modes = { 'side-by-side', 'git', 'vim' }, max_output_lines = 50 },
diff = {
git = {
args = { 'diff', '--no-index', '--word-diff=plain', '--word-diff-regex=.', '--no-prefix' },
@ -288,7 +292,15 @@ end
---@return cp.Config
function M.setup(user_config)
vim.validate({ user_config = { user_config, { 'table', 'nil' }, true } })
local cfg = vim.tbl_deep_extend('force', vim.deepcopy(M.defaults), user_config or {})
local defaults = vim.deepcopy(M.defaults)
if user_config and user_config.platforms then
for plat in pairs(defaults.platforms) do
if not user_config.platforms[plat] then
defaults.platforms[plat] = nil
end
end
end
local cfg = vim.tbl_deep_extend('force', defaults, user_config or {})
if not next(cfg.languages) then
error('[cp.nvim] At least one language must be configured')
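The platform-pruning loop above changes the merge semantics: when a user
config names any platforms, only those survive into the effective config
rather than being deep-merged with all defaults. For example (behavior
inferred from the code above):

```lua
-- With defaults containing atcoder, codechef, codeforces, and cses,
-- this user config leaves only codeforces enabled:
vim.g.cp = {
  platforms = {
    codeforces = { default_language = 'python' },
  },
}
```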
@ -301,7 +313,24 @@ function M.setup(user_config)
vim.validate({
hooks = { cfg.hooks, { 'table' } },
ui = { cfg.ui, { 'table' } },
debug = { cfg.debug, { 'boolean', 'nil' }, true },
open_url = { cfg.open_url, { 'boolean', 'nil' }, true },
filename = { cfg.filename, { 'function', 'nil' }, true },
scrapers = {
cfg.scrapers,
function(v)
if type(v) ~= 'table' then
return false
end
for _, s in ipairs(v) do
if not vim.tbl_contains(constants.PLATFORMS, s) then
return false
end
end
return true
end,
('one of {%s}'):format(table.concat(constants.PLATFORMS, ',')),
},
before_run = { cfg.hooks.before_run, { 'function', 'nil' }, true },
before_debug = { cfg.hooks.before_debug, { 'function', 'nil' }, true },
setup_code = { cfg.hooks.setup_code, { 'function', 'nil' }, true },
@ -309,14 +338,23 @@ function M.setup(user_config)
setup_io_output = { cfg.hooks.setup_io_output, { 'function', 'nil' }, true },
})
local layouts = require('cp.ui.layouts')
vim.validate({
ansi = { cfg.ui.ansi, 'boolean' },
diff_mode = {
cfg.ui.panel.diff_mode,
diff_modes = {
cfg.ui.panel.diff_modes,
function(v)
return vim.tbl_contains({ 'none', 'vim', 'git' }, v)
if type(v) ~= 'table' then
return false
end
for _, mode in ipairs(v) do
if not layouts.DIFF_MODES[mode] then
return false
end
end
return true
end,
"diff_mode must be 'none', 'vim', or 'git'",
('one of {%s}'):format(table.concat(vim.tbl_keys(layouts.DIFF_MODES), ',')),
},
max_output_lines = {
cfg.ui.panel.max_output_lines,
@ -326,6 +364,14 @@ function M.setup(user_config)
'positive integer',
},
git = { cfg.ui.diff.git, { 'table' } },
git_args = { cfg.ui.diff.git.args, is_string_list, 'string[]' },
width = {
cfg.ui.run.width,
function(v)
return type(v) == 'number' and v > 0 and v <= 1
end,
'decimal between 0 and 1',
},
next_test_key = {
cfg.ui.run.next_test_key,
function(v)
@ -379,6 +425,13 @@ function M.setup(user_config)
end,
'nil or non-empty string',
},
picker = {
cfg.ui.picker,
function(v)
return v == nil or v == 'telescope' or v == 'fzf-lua'
end,
"nil, 'telescope', or 'fzf-lua'",
},
})
for id, lang in pairs(cfg.languages) do
@ -439,7 +492,18 @@ function M.get_language_for_platform(platform_id, language_id)
}
end
local effective = cfg.runtime.effective[platform_id][language_id]
local platform_effective = cfg.runtime.effective[platform_id]
if not platform_effective then
return {
valid = false,
error = string.format(
'No runtime config for platform %s (plugin not initialized)',
platform_id
),
}
end
local effective = platform_effective[language_id]
if not effective then
return {
valid = false,

View file

@ -1,10 +1,11 @@
local M = {}
M.PLATFORMS = { 'atcoder', 'codeforces', 'cses' }
M.PLATFORMS = { 'atcoder', 'codechef', 'codeforces', 'cses' }
M.ACTIONS = { 'run', 'panel', 'next', 'prev', 'pick', 'cache', 'interact', 'edit' }
M.PLATFORM_DISPLAY_NAMES = {
atcoder = 'AtCoder',
codechef = 'CodeChef',
codeforces = 'CodeForces',
cses = 'CSES',
}


@@ -5,6 +5,8 @@ local utils = require('cp.utils')
local function check()
vim.health.start('cp.nvim [required] ~')
utils.setup_python_env()
if vim.fn.has('nvim-0.10.0') == 1 then
vim.health.ok('Neovim 0.10.0+ detected')
else
@@ -16,22 +18,37 @@ local function check()
vim.health.error('Windows is not supported')
end
if vim.fn.executable('uv') == 1 then
vim.health.ok('uv executable found')
local r = vim.system({ 'uv', '--version' }, { text = true }):wait()
if utils.is_nix_build() then
local source = utils.is_nix_discovered() and 'runtime discovery' or 'flake install'
vim.health.ok('Nix Python environment detected (' .. source .. ')')
local py = utils.get_nix_python()
vim.health.info('Python: ' .. py)
local r = vim.system({ py, '--version' }, { text = true }):wait()
if r.code == 0 then
vim.health.info('uv version: ' .. r.stdout:gsub('\n', ''))
vim.health.info('Python version: ' .. r.stdout:gsub('\n', ''))
end
else
vim.health.warn('uv not found (install https://docs.astral.sh/uv/ for scraping)')
end
if vim.fn.executable('uv') == 1 then
vim.health.ok('uv executable found')
local r = vim.system({ 'uv', '--version' }, { text = true }):wait()
if r.code == 0 then
vim.health.info('uv version: ' .. r.stdout:gsub('\n', ''))
end
else
vim.health.warn('uv not found (install https://docs.astral.sh/uv/ for scraping)')
end
local plugin_path = utils.get_plugin_path()
local venv_dir = plugin_path .. '/.venv'
if vim.fn.isdirectory(venv_dir) == 1 then
vim.health.ok('Python virtual environment found at ' .. venv_dir)
else
vim.health.info('Python virtual environment not set up (created on first scrape)')
if vim.fn.executable('nix') == 1 then
vim.health.info('nix available but Python environment not resolved via nix')
end
local plugin_path = utils.get_plugin_path()
local venv_dir = plugin_path .. '/.venv'
if vim.fn.isdirectory(venv_dir) == 1 then
vim.health.ok('Python virtual environment found at ' .. venv_dir)
else
vim.health.info('Python virtual environment not set up (created on first scrape)')
end
end
local time_cap = utils.time_capability()
@@ -41,7 +58,7 @@ local function check()
vim.health.error('GNU time not found: ' .. (time_cap.reason or ''))
end
local timeout_cap = utils.time_capability()
local timeout_cap = utils.timeout_capability()
if timeout_cap.ok then
vim.health.ok('GNU timeout found: ' .. timeout_cap.path)
else


@@ -11,27 +11,44 @@ if vim.fn.has('nvim-0.10.0') == 0 then
return {}
end
local user_config = {}
local config = nil
local initialized = false
local function ensure_initialized()
if initialized then
return true
end
local user_config = vim.g.cp or {}
local ok, result = pcall(config_module.setup, user_config)
if not ok then
local msg = tostring(result):gsub('^.+:%d+: ', '')
vim.notify(msg, vim.log.levels.ERROR)
return false
end
config_module.set_current_config(result)
initialized = true
return true
end
---@return nil
function M.handle_command(opts)
if not ensure_initialized() then
return
end
local commands = require('cp.commands')
commands.handle_command(opts)
end
function M.setup(opts)
opts = opts or {}
user_config = opts
config = config_module.setup(user_config)
config_module.set_current_config(config)
initialized = true
end
function M.is_initialized()
return initialized
end
---@deprecated Use `vim.g.cp` instead
function M.setup(user_config)
vim.deprecate('require("cp").setup()', 'vim.g.cp', 'v0.7.7', 'cp.nvim', false)
if user_config then
vim.g.cp = vim.tbl_deep_extend('force', vim.g.cp or {}, user_config)
end
end
return M
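The lazy-initialization change above reads user configuration from `vim.g.cp` on the first `:CP` invocation and deprecates `require('cp').setup()`. A minimal sketch of the new configuration style (the specific keys are illustrative, drawn from options validated elsewhere in this diff):

```lua
-- In init.lua, or any file sourced before the first :CP command.
-- No setup() call is needed; ensure_initialized() reads this lazily.
vim.g.cp = {
  debug = true,         -- gates the logger.log diagnostics added in this series
  ui = { ansi = true }, -- one of the keys checked in config.setup()'s vim.validate
}
```

Setting `vim.g.cp` has no side effects at startup; validation errors surface only when the first command runs.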


@@ -51,8 +51,6 @@ local function contest_picker(platform, refresh, language)
end
end,
['ctrl-r'] = function()
local cache = require('cp.cache')
cache.clear_contest_list(platform)
contest_picker(platform, true, language)
end,
},


@@ -39,24 +39,28 @@ end
---@param compile_cmd string[]
---@param substitutions SubstitutableCommand
function M.compile(compile_cmd, substitutions)
---@param on_complete fun(r: {code: integer, stdout: string})
function M.compile(compile_cmd, substitutions, on_complete)
local cmd = substitute_template(compile_cmd, substitutions)
local sh = table.concat(cmd, ' ') .. ' 2>&1'
logger.log('compile: ' .. sh)
local t0 = vim.uv.hrtime()
local r = vim.system({ 'sh', '-c', sh }, { text = false }):wait()
local dt = (vim.uv.hrtime() - t0) / 1e6
vim.system({ 'sh', '-c', sh }, { text = false }, function(r)
local dt = (vim.uv.hrtime() - t0) / 1e6
local ansi = require('cp.ui.ansi')
r.stdout = ansi.bytes_to_string(r.stdout or '')
local ansi = require('cp.ui.ansi')
r.stdout = ansi.bytes_to_string(r.stdout or '')
if r.code == 0 then
logger.log(('Compilation successful in %.1fms.'):format(dt), vim.log.levels.INFO)
else
logger.log(('Compilation failed in %.1fms.'):format(dt))
end
if r.code == 0 then
logger.log(('Compilation successful in %.1fms.'):format(dt), vim.log.levels.INFO)
else
logger.log(('Compilation failed in %.1fms.'):format(dt))
end
return r
vim.schedule(function()
on_complete(r)
end)
end)
end
local function parse_and_strip_time_v(output)
@@ -73,13 +77,19 @@ local function parse_and_strip_time_v(output)
return s, 0
end
local k = last_i - 1
while k >= 1 do
local ch = s:sub(k, k)
if ch ~= ' ' and ch ~= '\t' then
break
local tab_before_marker = s:find('\t[^\t]*Command being timed:', 1)
local k
if tab_before_marker then
k = tab_before_marker - 1
else
k = last_i - 1
while k >= 1 do
local ch = s:sub(k, k)
if ch == '\n' then
break
end
k = k - 1
end
k = k - 1
end
local head = s:sub(1, k)
@@ -97,7 +107,8 @@ local function parse_and_strip_time_v(output)
return head, peak_mb
end
function M.run(cmd, stdin, timeout_ms, memory_mb)
---@param on_complete fun(result: ExecuteResult)
function M.run(cmd, stdin, timeout_ms, memory_mb, on_complete)
local time_bin = utils.time_path()
local timeout_bin = utils.timeout_path()
@@ -109,78 +120,94 @@ function M.run(cmd, stdin, timeout_ms, memory_mb)
local sec = math.ceil(timeout_ms / 1000)
local timeout_prefix = ('%s -k 1s %ds '):format(timeout_bin, sec)
local sh = prefix .. timeout_prefix .. ('%s -v sh -c %q 2>&1'):format(time_bin, prog)
logger.log('run: ' .. sh)
local t0 = vim.uv.hrtime()
local r = vim
.system({ 'sh', '-c', sh }, {
stdin = stdin,
text = true,
})
:wait()
local dt = (vim.uv.hrtime() - t0) / 1e6
vim.system({ 'sh', '-c', sh }, { stdin = stdin, text = true }, function(r)
local dt = (vim.uv.hrtime() - t0) / 1e6
local code = r.code or 0
local raw = r.stdout or ''
local cleaned, peak_mb = parse_and_strip_time_v(raw)
local tled = code == 124
local code = r.code or 0
local raw = r.stdout or ''
local cleaned, peak_mb = parse_and_strip_time_v(raw)
local tled = code == 124
local signal = nil
if code >= 128 then
signal = constants.signal_codes[code]
end
local signal = nil
if code >= 128 then
signal = constants.signal_codes[code]
end
local lower = (cleaned or ''):lower()
local oom_hint = lower:find('std::bad_alloc', 1, true)
or lower:find('cannot allocate memory', 1, true)
or lower:find('out of memory', 1, true)
or lower:find('oom', 1, true)
or lower:find('enomem', 1, true)
local near_cap = peak_mb >= (0.90 * memory_mb)
local lower = (cleaned or ''):lower()
local oom_hint = lower:find('std::bad_alloc', 1, true)
or lower:find('cannot allocate memory', 1, true)
or lower:find('out of memory', 1, true)
or lower:find('oom', 1, true)
or lower:find('enomem', 1, true)
local near_cap = peak_mb >= (0.90 * memory_mb)
local mled = (peak_mb >= memory_mb) or near_cap or (oom_hint and not tled)
local mled = (peak_mb >= memory_mb) or near_cap or (oom_hint ~= nil and not tled)
if tled then
logger.log(('Execution timed out in %.1fms.'):format(dt))
elseif mled then
logger.log(('Execution memory limit exceeded in %.1fms.'):format(dt))
elseif code ~= 0 then
logger.log(('Execution failed in %.1fms (exit code %d).'):format(dt, code))
else
logger.log(('Execution successful in %.1fms.'):format(dt))
end
if tled then
logger.log(('Execution timed out in %.1fms.'):format(dt))
elseif mled then
logger.log(('Execution memory limit exceeded in %.1fms.'):format(dt))
elseif code ~= 0 then
logger.log(('Execution failed in %.1fms (exit code %d).'):format(dt, code))
else
logger.log(('Execution successful in %.1fms.'):format(dt))
end
return {
stdout = cleaned,
code = code,
time_ms = dt,
tled = tled,
mled = mled,
peak_mb = peak_mb,
signal = signal,
}
vim.schedule(function()
on_complete({
stdout = cleaned,
code = code,
time_ms = dt,
tled = tled,
mled = mled,
peak_mb = peak_mb,
signal = signal,
})
end)
end)
end
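With `M.run` rewritten around the callback form of `vim.system`, callers receive the result asynchronously instead of blocking the event loop on `:wait()`. A sketch of the new calling convention (the command, input, and limits here are hypothetical; the result fields match the table passed to `on_complete` above):

```lua
local execute = require('cp.execute')
-- Run ./build/sol with a 2s timeout and a 256 MB memory cap.
execute.run({ './build/sol' }, '1 2\n', 2000, 256, function(result)
  -- on_complete is invoked via vim.schedule, so Neovim API calls are safe here.
  if result.tled then
    vim.notify(('TLE after %.1fms'):format(result.time_ms))
  else
    vim.notify(('exit %d, peak %.0f MB'):format(result.code, result.peak_mb))
  end
end)
```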
function M.compile_problem(debug)
---@param debug boolean?
---@param on_complete fun(result: {success: boolean, output: string?})
function M.compile_problem(debug, on_complete)
local state = require('cp.state')
local config = require('cp.config').get_config()
local platform = state.get_platform()
local language = state.get_language() or config.platforms[platform].default_language
local eff = config.runtime.effective[platform][language]
local source_file = state.get_source_file()
if source_file then
local buf = vim.fn.bufnr(source_file)
if buf ~= -1 and vim.api.nvim_buf_is_loaded(buf) and vim.bo[buf].modified then
vim.api.nvim_buf_call(buf, function()
vim.cmd.write({ mods = { silent = true, noautocmd = true } })
end)
end
end
local compile_config = (debug and eff.commands.debug) or eff.commands.build
if not compile_config then
return { success = true, output = nil }
on_complete({ success = true, output = nil })
return
end
require('cp.utils').ensure_dirs()
local binary = debug and state.get_debug_file() or state.get_binary_file()
local substitutions = { source = state.get_source_file(), binary = binary }
local r = M.compile(compile_config, substitutions)
if r.code ~= 0 then
return { success = false, output = r.stdout or 'unknown error' }
end
return { success = true, output = nil }
M.compile(compile_config, substitutions, function(r)
if r.code ~= 0 then
on_complete({ success = false, output = r.stdout or 'unknown error' })
else
on_complete({ success = true, output = nil })
end
end)
end
return M

View file

@@ -101,8 +101,8 @@ end
---@param test_case RanTestCase
---@param debug boolean?
---@return { status: "pass"|"fail"|"tle"|"mle", actual: string, actual_highlights: Highlight[], error: string, stderr: string, time_ms: number, code: integer, ok: boolean, signal: string, tled: boolean, mled: boolean, rss_mb: number }
local function run_single_test_case(test_case, debug)
---@param on_complete fun(result: { status: "pass"|"fail"|"tle"|"mle", actual: string, actual_highlights: Highlight[], error: string, stderr: string, time_ms: number, code: integer, ok: boolean, signal: string?, tled: boolean, mled: boolean, rss_mb: number })
local function run_single_test_case(test_case, debug, on_complete)
local source_file = state.get_source_file()
local binary_file = debug and state.get_debug_file() or state.get_binary_file()
@@ -117,65 +117,65 @@ local function run_single_test_case(test_case, debug)
local timeout_ms = (panel_state.constraints and panel_state.constraints.timeout_ms) or 0
local memory_mb = panel_state.constraints and panel_state.constraints.memory_mb or 0
local r = execute.run(cmd, stdin_content, timeout_ms, memory_mb)
execute.run(cmd, stdin_content, timeout_ms, memory_mb, function(r)
local ansi = require('cp.ui.ansi')
local out = r.stdout or ''
local highlights = {}
if out ~= '' then
if config.ui.ansi then
local parsed = ansi.parse_ansi_text(out)
out = table.concat(parsed.lines, '\n')
highlights = parsed.highlights
else
out = out:gsub('\027%[[%d;]*[a-zA-Z]', '')
end
end
local ansi = require('cp.ui.ansi')
local out = r.stdout or ''
local highlights = {}
if out ~= '' then
if config.ui.ansi then
local parsed = ansi.parse_ansi_text(out)
out = table.concat(parsed.lines, '\n')
highlights = parsed.highlights
local max_lines = config.ui.panel.max_output_lines
local lines = vim.split(out, '\n')
if #lines > max_lines then
local trimmed = {}
for i = 1, max_lines do
table.insert(trimmed, lines[i])
end
table.insert(trimmed, string.format('... (output trimmed after %d lines)', max_lines))
out = table.concat(trimmed, '\n')
end
local expected = test_case.expected or ''
local ok = normalize_lines(out) == normalize_lines(expected)
local signal = r.signal
if not signal and r.code and r.code >= 128 then
signal = constants.signal_codes[r.code]
end
local status
if r.tled then
status = 'tle'
elseif r.mled then
status = 'mle'
elseif ok then
status = 'pass'
else
out = out:gsub('\027%[[%d;]*[a-zA-Z]', '')
status = 'fail'
end
end
local max_lines = config.ui.panel.max_output_lines
local lines = vim.split(out, '\n')
if #lines > max_lines then
local trimmed = {}
for i = 1, max_lines do
table.insert(trimmed, lines[i])
end
table.insert(trimmed, string.format('... (output trimmed after %d lines)', max_lines))
out = table.concat(trimmed, '\n')
end
local expected = test_case.expected or ''
local ok = normalize_lines(out) == normalize_lines(expected)
local signal = r.signal
if not signal and r.code and r.code >= 128 then
signal = constants.signal_codes[r.code]
end
local status
if r.tled then
status = 'tle'
elseif r.mled then
status = 'mle'
elseif ok then
status = 'pass'
else
status = 'fail'
end
return {
status = status,
actual = out,
actual_highlights = highlights,
error = (r.code ~= 0 and not ok) and out or '',
stderr = '',
time_ms = r.time_ms,
code = r.code,
ok = ok,
signal = signal,
tled = r.tled or false,
mled = r.mled or false,
rss_mb = r.peak_mb or 0,
}
on_complete({
status = status,
actual = out,
actual_highlights = highlights,
error = (r.code ~= 0 and not ok) and out or '',
stderr = '',
time_ms = r.time_ms,
code = r.code,
ok = ok,
signal = signal,
tled = r.tled or false,
mled = r.mled or false,
rss_mb = r.peak_mb or 0,
})
end)
end
---@return boolean
@@ -198,38 +198,76 @@ function M.load_test_cases()
return #tcs > 0
end
---@param debug boolean?
---@param on_complete fun(result: RanTestCase?)
function M.run_combined_test(debug, on_complete)
local combined = cache.get_combined_test(
state.get_platform() or '',
state.get_contest_id() or '',
state.get_problem_id()
)
if not combined then
logger.log('No combined test found', vim.log.levels.ERROR)
on_complete(nil)
return
end
local ran_test = {
index = 1,
input = combined.input,
expected = combined.expected,
status = 'running',
actual = nil,
time_ms = nil,
code = nil,
ok = nil,
signal = nil,
tled = false,
mled = false,
rss_mb = 0,
selected = true,
}
run_single_test_case(ran_test, debug, function(result)
on_complete(result)
end)
end
---@param index number
---@param debug boolean?
---@return boolean
function M.run_test_case(index, debug)
---@param on_complete fun(success: boolean)
function M.run_test_case(index, debug, on_complete)
local tc = panel_state.test_cases[index]
if not tc then
return false
on_complete(false)
return
end
tc.status = 'running'
local r = run_single_test_case(tc, debug)
run_single_test_case(tc, debug, function(r)
tc.status = r.status
tc.actual = r.actual
tc.actual_highlights = r.actual_highlights
tc.error = r.error
tc.stderr = r.stderr
tc.time_ms = r.time_ms
tc.code = r.code
tc.ok = r.ok
tc.signal = r.signal
tc.tled = r.tled
tc.mled = r.mled
tc.rss_mb = r.rss_mb
tc.status = r.status
tc.actual = r.actual
tc.actual_highlights = r.actual_highlights
tc.error = r.error
tc.stderr = r.stderr
tc.time_ms = r.time_ms
tc.code = r.code
tc.ok = r.ok
tc.signal = r.signal
tc.tled = r.tled
tc.mled = r.mled
tc.rss_mb = r.rss_mb
return true
on_complete(true)
end)
end
---@param indices? integer[]
---@param debug boolean?
---@return RanTestCase[]
function M.run_all_test_cases(indices, debug)
---@param on_each? fun(index: integer, total: integer)
---@param on_done fun(results: RanTestCase[])
function M.run_all_test_cases(indices, debug, on_each, on_done)
local to_run = indices
if not to_run then
to_run = {}
@@ -238,11 +276,26 @@ function M.run_all_test_cases(indices, debug)
end
end
for _, i in ipairs(to_run) do
M.run_test_case(i, debug)
local function run_next(pos)
if pos > #to_run then
logger.log(
('Finished %s %d test cases.'):format(debug and 'debugging' or 'running', #to_run),
vim.log.levels.INFO,
true
)
on_done(panel_state.test_cases)
return
end
M.run_test_case(to_run[pos], debug, function()
if on_each then
on_each(pos, #to_run)
end
run_next(pos + 1)
end)
end
return panel_state.test_cases
run_next(1)
end
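`run_all_test_cases` replaces its former synchronous `for` loop with the recursive `run_next` chain because `run_test_case` now completes in a callback; a plain loop would spawn every test subprocess at once. The same pattern in isolation (a sketch, assuming any asynchronous `step(item, done)` function):

```lua
-- Sequential async iteration: item N+1 starts only after item N's callback fires.
local function for_each_async(items, step, done)
  local function next_item(pos)
    if pos > #items then
      return done()
    end
    step(items[pos], function()
      next_item(pos + 1)
    end)
  end
  next_item(1)
end
```

Because each `step` is asynchronous, every `next_item` frame returns before the next callback fires, so recursion depth does not grow with the number of items.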
---@return PanelState


@@ -4,6 +4,10 @@
local M = {}
local function strwidth(s)
return vim.api.nvim_strwidth(s)
end
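The new `strwidth` helper matters because `#s` counts bytes while `vim.api.nvim_strwidth` counts display cells; multibyte program output would otherwise skew the column widths computed below. A small illustration:

```lua
local s = 'héllo'                 -- 'é' is two bytes in UTF-8
print(#s)                         -- 6 (bytes)
print(vim.api.nvim_strwidth(s))   -- 5 (display cells)
```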
local exit_code_names = {
[128] = 'SIGHUP',
[129] = 'SIGINT',
@@ -26,6 +30,12 @@ local exit_code_names = {
---@param ran_test_case RanTestCase
---@return StatusInfo
function M.get_status_info(ran_test_case)
if ran_test_case.status == 'pending' then
return { text = '...', highlight_group = 'CpTestNA' }
elseif ran_test_case.status == 'running' then
return { text = 'RUN', highlight_group = 'CpTestNA' }
end
if ran_test_case.ok then
return { text = 'AC', highlight_group = 'CpTestAC' }
end
@@ -34,7 +44,7 @@ function M.get_status_info(ran_test_case)
return { text = 'TLE', highlight_group = 'CpTestTLE' }
elseif ran_test_case.mled then
return { text = 'MLE', highlight_group = 'CpTestMLE' }
elseif ran_test_case.code > 0 and ran_test_case.code >= 128 then
elseif ran_test_case.code and ran_test_case.code >= 128 then
return { text = 'RTE', highlight_group = 'CpTestRTE' }
elseif ran_test_case.code == 0 and not ran_test_case.ok then
return { text = 'WA', highlight_group = 'CpTestWA' }
@@ -63,24 +73,24 @@ local function compute_cols(test_state)
for i, tc in ipairs(test_state.test_cases) do
local prefix = (i == test_state.current_index) and '>' or ' '
w.num = math.max(w.num, #(' ' .. prefix .. i .. ' '))
w.status = math.max(w.status, #(' ' .. M.get_status_info(tc).text .. ' '))
w.num = math.max(w.num, strwidth(' ' .. prefix .. i .. ' '))
w.status = math.max(w.status, strwidth(' ' .. M.get_status_info(tc).text .. ' '))
local time_str = tc.time_ms and string.format('%.2f', tc.time_ms) or ''
w.time = math.max(w.time, #(' ' .. time_str .. ' '))
w.timeout = math.max(w.timeout, #(' ' .. timeout_str .. ' '))
w.time = math.max(w.time, strwidth(' ' .. time_str .. ' '))
w.timeout = math.max(w.timeout, strwidth(' ' .. timeout_str .. ' '))
local rss_str = (tc.rss_mb and string.format('%.0f', tc.rss_mb)) or ''
w.rss = math.max(w.rss, #(' ' .. rss_str .. ' '))
w.memory = math.max(w.memory, #(' ' .. memory_str .. ' '))
w.exit = math.max(w.exit, #(' ' .. format_exit_code(tc.code) .. ' '))
w.rss = math.max(w.rss, strwidth(' ' .. rss_str .. ' '))
w.memory = math.max(w.memory, strwidth(' ' .. memory_str .. ' '))
w.exit = math.max(w.exit, strwidth(' ' .. format_exit_code(tc.code) .. ' '))
end
w.num = math.max(w.num, #' # ')
w.status = math.max(w.status, #' Status ')
w.time = math.max(w.time, #' Runtime (ms) ')
w.timeout = math.max(w.timeout, #' Time (ms) ')
w.rss = math.max(w.rss, #' RSS (MB) ')
w.memory = math.max(w.memory, #' Mem (MB) ')
w.exit = math.max(w.exit, #' Exit Code ')
w.num = math.max(w.num, strwidth(' # '))
w.status = math.max(w.status, strwidth(' Status '))
w.time = math.max(w.time, strwidth(' Runtime (ms) '))
w.timeout = math.max(w.timeout, strwidth(' Time (ms) '))
w.rss = math.max(w.rss, strwidth(' RSS (MB) '))
w.memory = math.max(w.memory, strwidth(' Mem (MB) '))
w.exit = math.max(w.exit, strwidth(' Exit Code '))
local sum = w.num + w.status + w.time + w.timeout + w.rss + w.memory + w.exit
local inner = sum + 6
@@ -89,7 +99,7 @@ local function compute_cols(test_state)
end
local function center(text, width)
local pad = width - #text
local pad = width - strwidth(text)
if pad <= 0 then
return text
end
@@ -101,7 +111,7 @@ local function format_num_column(prefix, idx, width)
local num_str = tostring(idx)
local content = (#num_str == 1) and (' ' .. prefix .. ' ' .. num_str .. ' ')
or (' ' .. prefix .. num_str .. ' ')
local total_pad = width - #content
local total_pad = width - strwidth(content)
if total_pad <= 0 then
return content
end
@@ -314,10 +324,10 @@ function M.render_test_list(test_state)
for _, input_line in ipairs(vim.split(tc.input, '\n', { plain = true, trimempty = false })) do
local s = input_line or ''
if #s > c.inner then
if strwidth(s) > c.inner then
s = string.sub(s, 1, c.inner)
end
local pad = c.inner - #s
local pad = c.inner - strwidth(s)
table.insert(lines, '' .. s .. string.rep(' ', pad) .. '')
end
@@ -357,14 +367,12 @@ end
---@return table<string, table>
function M.get_highlight_groups()
return {
CpTestAC = { fg = '#10b981' },
CpTestWA = { fg = '#ef4444' },
CpTestTLE = { fg = '#f59e0b' },
CpTestMLE = { fg = '#f59e0b' },
CpTestRTE = { fg = '#8b5cf6' },
CpTestNA = { fg = '#6b7280' },
CpDiffRemoved = { fg = '#ef4444', bg = '#1f1f1f' },
CpDiffAdded = { fg = '#10b981', bg = '#1f1f1f' },
CpTestAC = { link = 'DiagnosticOk' },
CpTestWA = { link = 'DiagnosticError' },
CpTestTLE = { link = 'DiagnosticWarn' },
CpTestMLE = { link = 'DiagnosticWarn' },
CpTestRTE = { link = 'DiagnosticHint' },
CpTestNA = { link = 'Comment' },
}
end


@@ -25,10 +25,27 @@ end
---@param args string[]
---@param opts { sync?: boolean, ndjson?: boolean, on_event?: fun(ev: table), on_exit?: fun(result: table) }
local function run_scraper(platform, subcommand, args, opts)
if not utils.setup_python_env() then
local msg = 'no Python environment available (install uv or nix)'
logger.log(msg, vim.log.levels.ERROR)
if opts and opts.on_exit then
opts.on_exit({ success = false, error = msg })
end
return { success = false, error = msg }
end
local plugin_path = utils.get_plugin_path()
local cmd = { 'uv', 'run', '--directory', plugin_path, '-m', 'scrapers.' .. platform, subcommand }
local cmd = utils.get_python_cmd(platform, plugin_path)
vim.list_extend(cmd, { subcommand })
vim.list_extend(cmd, args)
logger.log('scraper cmd: ' .. table.concat(cmd, ' '))
local env = vim.fn.environ()
env.VIRTUAL_ENV = ''
env.PYTHONPATH = ''
env.CONDA_PREFIX = ''
if opts and opts.ndjson then
local uv = vim.loop
local stdout = uv.new_pipe(false)
@@ -36,31 +53,32 @@ local function run_scraper(platform, subcommand, args, opts)
local buf = ''
local handle
handle = uv.spawn(
cmd[1],
{ args = vim.list_slice(cmd, 2), stdio = { nil, stdout, stderr } },
function(code, signal)
if buf ~= '' and opts.on_event then
local ok_tail, ev_tail = pcall(vim.json.decode, buf)
if ok_tail then
opts.on_event(ev_tail)
end
buf = ''
end
if opts.on_exit then
opts.on_exit({ success = (code == 0), code = code, signal = signal })
end
if not stdout:is_closing() then
stdout:close()
end
if not stderr:is_closing() then
stderr:close()
end
if handle and not handle:is_closing() then
handle:close()
handle = uv.spawn(cmd[1], {
args = vim.list_slice(cmd, 2),
stdio = { nil, stdout, stderr },
env = env,
cwd = plugin_path,
}, function(code, signal)
if buf ~= '' and opts.on_event then
local ok_tail, ev_tail = pcall(vim.json.decode, buf)
if ok_tail then
opts.on_event(ev_tail)
end
buf = ''
end
)
if opts.on_exit then
opts.on_exit({ success = (code == 0), code = code, signal = signal })
end
if not stdout:is_closing() then
stdout:close()
end
if not stderr:is_closing() then
stderr:close()
end
if handle and not handle:is_closing() then
handle:close()
end
end)
if not handle then
logger.log('Failed to start scraper process', vim.log.levels.ERROR)
@@ -97,7 +115,7 @@ local function run_scraper(platform, subcommand, args, opts)
return
end
local sysopts = { text = true, timeout = 30000 }
local sysopts = { text = true, timeout = 30000, env = env, cwd = plugin_path }
if opts and opts.sync then
local result = vim.system(cmd, sysopts):wait()
return syshandle(result)
@@ -181,7 +199,7 @@ function M.scrape_all_tests(platform, contest_id, callback)
return
end
vim.schedule(function()
vim.system({ 'mkdir', '-p', 'build', 'io' }):wait()
require('cp.utils').ensure_dirs()
local config = require('cp.config')
local base_name = config.default_filename(contest_id, ev.problem_id)
for i, t in ipairs(ev.tests) do
@@ -189,15 +207,17 @@ function M.scrape_all_tests(platform, contest_id, callback)
local expected_file = 'io/' .. base_name .. '.' .. i .. '.cpout'
local input_content = t.input:gsub('\r', '')
local expected_content = t.expected:gsub('\r', '')
vim.fn.writefile(vim.split(input_content, '\n', { trimempty = true }), input_file)
vim.fn.writefile(vim.split(expected_content, '\n', { trimempty = true }), expected_file)
vim.fn.writefile(vim.split(input_content, '\n'), input_file)
vim.fn.writefile(vim.split(expected_content, '\n'), expected_file)
end
if type(callback) == 'function' then
callback({
combined = ev.combined,
tests = ev.tests,
timeout_ms = ev.timeout_ms or 0,
memory_mb = ev.memory_mb or 0,
interactive = ev.interactive or false,
multi_test = ev.multi_test or false,
problem_id = ev.problem_id,
})
end


@@ -82,7 +82,7 @@ local function start_tests(platform, contest_id, problems)
return not vim.tbl_isempty(cache.get_test_cases(platform, contest_id, p.id))
end, problems)
if cached_len ~= #problems then
logger.log(('Fetching test cases... (%d/%d)'):format(cached_len, #problems))
logger.log(('Fetching %s/%s problem tests...'):format(cached_len, #problems))
scraper.scrape_all_tests(platform, contest_id, function(ev)
local cached_tests = {}
if not ev.interactive and vim.tbl_isempty(ev.tests) then
@@ -95,22 +95,21 @@ local function start_tests(platform, contest_id, problems)
platform,
contest_id,
ev.problem_id,
ev.combined,
cached_tests,
ev.timeout_ms or 0,
ev.memory_mb or 0,
ev.interactive
ev.interactive,
ev.multi_test
)
local io_state = state.get_io_view_state()
if io_state then
local test_cases = cache.get_test_cases(platform, contest_id, state.get_problem_id())
local input_lines = {}
for _, tc in ipairs(test_cases) do
for _, line in ipairs(vim.split(tc.input, '\n')) do
table.insert(input_lines, line)
end
local combined_test = cache.get_combined_test(platform, contest_id, state.get_problem_id())
if combined_test then
local input_lines = vim.split(combined_test.input, '\n')
require('cp.utils').update_buffer_content(io_state.input_buf, input_lines, nil, nil)
end
require('cp.utils').update_buffer_content(io_state.input_buf, input_lines, nil, nil)
end
end)
end
@@ -161,6 +160,8 @@ function M.setup_contest(platform, contest_id, problem_id, language)
vim.bo[bufnr].buftype = ''
vim.bo[bufnr].swapfile = false
state.set_language(lang)
if cfg.hooks and cfg.hooks.setup_code and not vim.b[bufnr].cp_setup_done then
local ok = pcall(cfg.hooks.setup_code, state)
if ok then
@@ -217,7 +218,16 @@ function M.setup_problem(problem_id, language)
return
end
local old_problem_id = state.get_problem_id()
state.set_problem_id(problem_id)
if old_problem_id ~= problem_id then
local io_state = state.get_io_view_state()
if io_state and io_state.output_buf and vim.api.nvim_buf_is_valid(io_state.output_buf) then
local utils = require('cp.utils')
utils.update_buffer_content(io_state.output_buf, {}, nil, nil)
end
end
local config = config_module.get_config()
local lang = language
or (config.platforms[platform] and config.platforms[platform].default_language)
@@ -242,60 +252,66 @@ function M.setup_problem(problem_id, language)
local prov = state.get_provisional()
if prov and prov.platform == platform and prov.contest_id == (state.get_contest_id() or '') then
if vim.api.nvim_buf_is_valid(prov.bufnr) then
vim.api.nvim_buf_set_name(prov.bufnr, source_file)
vim.bo[prov.bufnr].swapfile = true
-- selene: allow(mixed_table)
vim.cmd.write({
vim.fn.fnameescape(source_file),
bang = true,
mods = { silent = true, noautocmd = true, keepalt = true },
})
state.set_solution_win(vim.api.nvim_get_current_win())
if config.hooks and config.hooks.setup_code and not vim.b[prov.bufnr].cp_setup_done then
local ok = pcall(config.hooks.setup_code, state)
if ok then
local existing_bufnr = vim.fn.bufnr(source_file)
if existing_bufnr ~= -1 then
vim.api.nvim_buf_delete(prov.bufnr, { force = true })
state.set_provisional(nil)
else
vim.api.nvim_buf_set_name(prov.bufnr, source_file)
vim.bo[prov.bufnr].swapfile = true
-- selene: allow(mixed_table)
vim.cmd.write({
vim.fn.fnameescape(source_file),
bang = true,
mods = { silent = true, noautocmd = true, keepalt = true },
})
state.set_solution_win(vim.api.nvim_get_current_win())
if config.hooks and config.hooks.setup_code and not vim.b[prov.bufnr].cp_setup_done then
local ok = pcall(config.hooks.setup_code, state)
if ok then
vim.b[prov.bufnr].cp_setup_done = true
end
elseif not vim.b[prov.bufnr].cp_setup_done then
helpers.clearcol(prov.bufnr)
vim.b[prov.bufnr].cp_setup_done = true
end
elseif not vim.b[prov.bufnr].cp_setup_done then
helpers.clearcol(prov.bufnr)
vim.b[prov.bufnr].cp_setup_done = true
cache.set_file_state(
vim.fn.fnamemodify(source_file, ':p'),
platform,
state.get_contest_id() or '',
state.get_problem_id() or '',
lang
)
require('cp.ui.views').ensure_io_view()
state.set_provisional(nil)
return
end
cache.set_file_state(
vim.fn.fnamemodify(source_file, ':p'),
platform,
state.get_contest_id() or '',
state.get_problem_id() or '',
lang
)
require('cp.ui.views').ensure_io_view()
else
state.set_provisional(nil)
end
state.set_provisional(nil)
return
end
vim.schedule(function()
vim.cmd.only({ mods = { silent = true } })
vim.cmd.e(source_file)
local bufnr = vim.api.nvim_get_current_buf()
state.set_solution_win(vim.api.nvim_get_current_win())
if config.hooks and config.hooks.setup_code and not vim.b[bufnr].cp_setup_done then
local ok = pcall(config.hooks.setup_code, state)
if ok then
vim.b[bufnr].cp_setup_done = true
end
elseif not vim.b[bufnr].cp_setup_done then
helpers.clearcol(bufnr)
vim.cmd.only({ mods = { silent = true } })
vim.cmd.e(source_file)
local bufnr = vim.api.nvim_get_current_buf()
state.set_solution_win(vim.api.nvim_get_current_win())
require('cp.ui.views').ensure_io_view()
if config.hooks and config.hooks.setup_code and not vim.b[bufnr].cp_setup_done then
local ok = pcall(config.hooks.setup_code, state)
if ok then
vim.b[bufnr].cp_setup_done = true
end
cache.set_file_state(
vim.fn.expand('%:p'),
platform,
state.get_contest_id() or '',
state.get_problem_id() or '',
lang
)
require('cp.ui.views').ensure_io_view()
end)
elseif not vim.b[bufnr].cp_setup_done then
helpers.clearcol(bufnr)
vim.b[bufnr].cp_setup_done = true
end
cache.set_file_state(
vim.fn.expand('%:p'),
platform,
state.get_contest_id() or '',
state.get_problem_id() or '',
lang
)
end
---@param direction integer
@@ -334,6 +350,8 @@ function M.navigate_problem(direction, language)
return
end
logger.log(('navigate_problem: %s -> %s'):format(current_problem_id, problems[new_index].id))
local active_panel = state.get_active_panel()
if active_panel == 'run' then
require('cp.ui.views').disable()
@@ -364,6 +382,12 @@ function M.navigate_problem(direction, language)
end
end
local io_state = state.get_io_view_state()
if io_state and io_state.output_buf and vim.api.nvim_buf_is_valid(io_state.output_buf) then
local utils = require('cp.utils')
utils.update_buffer_content(io_state.output_buf, {}, nil, nil)
end
M.setup_contest(platform, contest_id, problems[new_index].id, lang)
end


@@ -9,9 +9,8 @@
---@class cp.IoViewState
---@field output_buf integer
---@field input_buf integer
---@field output_win integer
---@field input_win integer
---@field current_test_index integer?
---@field source_buf integer?
---@class cp.State
---@field get_platform fun(): string?
@@ -200,19 +199,7 @@ end
---@return cp.IoViewState?
function M.get_io_view_state()
if not state.io_view_state then
return nil
end
local s = state.io_view_state
if
vim.api.nvim_buf_is_valid(s.output_buf)
and vim.api.nvim_buf_is_valid(s.input_buf)
and vim.api.nvim_win_is_valid(s.output_win)
and vim.api.nvim_win_is_valid(s.input_win)
then
return s
end
return nil
return state.io_view_state
end
---@param s cp.IoViewState?

View file

@@ -90,7 +90,7 @@ local function delete_current_test()
return
end
if #edit_state.test_buffers == 1 then
logger.log('Cannot have 0 problem tests.', vim.log.levels.ERROR)
logger.log('Problems must have at least one test case.', vim.log.levels.ERROR)
return
end
@ -217,6 +217,32 @@ setup_keybindings = function(buf)
{ buffer = buf, silent = true, desc = 'Add test' }
)
end
local augroup = vim.api.nvim_create_augroup('cp_edit_guard', { clear = false })
vim.api.nvim_create_autocmd({ 'BufDelete', 'BufWipeout' }, {
group = augroup,
buffer = buf,
callback = function()
vim.schedule(function()
if not edit_state then
return
end
local is_tracked = false
for _, pair in ipairs(edit_state.test_buffers) do
if pair.input_buf == buf or pair.expected_buf == buf then
is_tracked = true
break
end
end
if is_tracked then
logger.log('Test buffer closed unexpectedly. Exiting editor.', vim.log.levels.WARN)
M.toggle_edit()
end
end)
end,
})
end
local function save_all_tests()
@ -244,14 +270,34 @@ local function save_all_tests()
end
end
local contest_data = cache.get_contest_data(platform, contest_id)
local is_multi_test = contest_data.problems[contest_data.index_map[problem_id]].multi_test
or false
-- Generate combined test from individual test cases
local combined_input = table.concat(
vim.tbl_map(function(tc)
return tc.input
end, edit_state.test_cases),
'\n'
)
local combined_expected = table.concat(
vim.tbl_map(function(tc)
return tc.expected
end, edit_state.test_cases),
'\n'
)
cache.set_test_cases(
platform,
contest_id,
problem_id,
{ input = combined_input, expected = combined_expected },
edit_state.test_cases,
edit_state.constraints and edit_state.constraints.timeout_ms or 0,
edit_state.constraints and edit_state.constraints.memory_mb or 0,
false
false,
is_multi_test
)
local config = config_module.get_config()
@ -279,6 +325,8 @@ function M.toggle_edit(test_index)
save_all_tests()
edit_state = nil
pcall(vim.api.nvim_clear_autocmds, { group = 'cp_edit_guard' })
local saved = state.get_saved_session()
if saved then
vim.fn.delete(saved)

@ -26,7 +26,7 @@ local function parse_diff_line(text)
line = 0,
col_start = highlight_start,
col_end = #result_text,
highlight_group = 'CpDiffRemoved',
highlight_group = 'DiffDelete',
})
pos = removed_end + 1
else
@ -38,7 +38,7 @@ local function parse_diff_line(text)
line = 0,
col_start = highlight_start,
col_end = #result_text,
highlight_group = 'CpDiffAdded',
highlight_group = 'DiffAdd',
})
pos = added_end + 1
else

@ -3,7 +3,13 @@ local M = {}
local helpers = require('cp.helpers')
local utils = require('cp.utils')
local function create_none_diff_layout(parent_win, expected_content, actual_content)
M.DIFF_MODES = {
['side-by-side'] = 'side-by-side',
vim = 'vim',
git = 'git',
}
local function create_side_by_side_layout(parent_win, expected_content, actual_content)
local expected_buf = utils.create_buffer_with_options()
local actual_buf = utils.create_buffer_with_options()
helpers.clearcol(expected_buf)
@ -21,8 +27,13 @@ local function create_none_diff_layout(parent_win, expected_content, actual_cont
vim.api.nvim_set_option_value('filetype', 'cp', { buf = expected_buf })
vim.api.nvim_set_option_value('filetype', 'cp', { buf = actual_buf })
vim.api.nvim_set_option_value('winbar', 'Expected', { win = expected_win })
vim.api.nvim_set_option_value('winbar', 'Actual', { win = actual_win })
local label = M.DIFF_MODES['side-by-side']
vim.api.nvim_set_option_value(
'winbar',
('expected (diff: %s)'):format(label),
{ win = expected_win }
)
vim.api.nvim_set_option_value('winbar', ('actual (diff: %s)'):format(label), { win = actual_win })
local expected_lines = vim.split(expected_content, '\n', { plain = true, trimempty = true })
local actual_lines = vim.split(actual_content, '\n', { plain = true })
@ -33,6 +44,7 @@ local function create_none_diff_layout(parent_win, expected_content, actual_cont
return {
buffers = { expected_buf, actual_buf },
windows = { expected_win, actual_win },
mode = 'side-by-side',
cleanup = function()
pcall(vim.api.nvim_win_close, expected_win, true)
pcall(vim.api.nvim_win_close, actual_win, true)
@ -60,8 +72,13 @@ local function create_vim_diff_layout(parent_win, expected_content, actual_conte
vim.api.nvim_set_option_value('filetype', 'cp', { buf = expected_buf })
vim.api.nvim_set_option_value('filetype', 'cp', { buf = actual_buf })
vim.api.nvim_set_option_value('winbar', 'Expected', { win = expected_win })
vim.api.nvim_set_option_value('winbar', 'Actual', { win = actual_win })
local label = M.DIFF_MODES.vim
vim.api.nvim_set_option_value(
'winbar',
('expected (diff: %s)'):format(label),
{ win = expected_win }
)
vim.api.nvim_set_option_value('winbar', ('actual (diff: %s)'):format(label), { win = actual_win })
local expected_lines = vim.split(expected_content, '\n', { plain = true, trimempty = true })
local actual_lines = vim.split(actual_content, '\n', { plain = true })
@ -83,6 +100,7 @@ local function create_vim_diff_layout(parent_win, expected_content, actual_conte
return {
buffers = { expected_buf, actual_buf },
windows = { expected_win, actual_win },
mode = 'vim',
cleanup = function()
pcall(vim.api.nvim_win_close, expected_win, true)
pcall(vim.api.nvim_win_close, actual_win, true)
@ -103,7 +121,8 @@ local function create_git_diff_layout(parent_win, expected_content, actual_conte
vim.api.nvim_win_set_buf(diff_win, diff_buf)
vim.api.nvim_set_option_value('filetype', 'cp', { buf = diff_buf })
vim.api.nvim_set_option_value('winbar', 'Expected vs Actual', { win = diff_win })
local label = M.DIFF_MODES.git
vim.api.nvim_set_option_value('winbar', ('diff: %s'):format(label), { win = diff_win })
local diff_backend = require('cp.ui.diff')
local backend = diff_backend.get_best_backend('git')
@ -121,6 +140,7 @@ local function create_git_diff_layout(parent_win, expected_content, actual_conte
return {
buffers = { diff_buf },
windows = { diff_win },
mode = 'git',
cleanup = function()
pcall(vim.api.nvim_win_close, diff_win, true)
pcall(vim.api.nvim_buf_delete, diff_buf, { force = true })
@ -143,6 +163,7 @@ local function create_single_layout(parent_win, content)
return {
buffers = { buf },
windows = { win },
mode = 'single',
cleanup = function()
pcall(vim.api.nvim_win_close, win, true)
pcall(vim.api.nvim_buf_delete, buf, { force = true })
@ -153,12 +174,14 @@ end
function M.create_diff_layout(mode, parent_win, expected_content, actual_content)
if mode == 'single' then
return create_single_layout(parent_win, actual_content)
elseif mode == 'none' then
return create_none_diff_layout(parent_win, expected_content, actual_content)
elseif mode == 'side-by-side' then
return create_side_by_side_layout(parent_win, expected_content, actual_content)
elseif mode == 'git' then
return create_git_diff_layout(parent_win, expected_content, actual_content)
else
elseif mode == 'vim' then
return create_vim_diff_layout(parent_win, expected_content, actual_content)
else
return create_side_by_side_layout(parent_win, expected_content, actual_content)
end
end
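The fallback branch above makes `side-by-side` the default layout when an unknown mode is requested. A hypothetical user configuration selecting the cycle order (the `require('cp').setup` shape and table names are assumed for illustration, not confirmed by this diff):

```lua
-- diff_modes[1] is picked up as the default by update_diff_panes;
-- the setup() signature here is illustrative.
require('cp').setup({
  ui = {
    panel = {
      diff_modes = { 'side-by-side', 'vim', 'git' },
    },
  },
})
```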
@ -191,12 +214,13 @@ function M.update_diff_panes(
actual_content = actual_content
end
local desired_mode = is_compilation_failure and 'single' or config.ui.panel.diff_mode
local default_mode = config.ui.panel.diff_modes[1]
local desired_mode = is_compilation_failure and 'single' or (current_mode or default_mode)
local highlight = require('cp.ui.highlight')
local diff_namespace = highlight.create_namespace()
local ansi_namespace = vim.api.nvim_create_namespace('cp_ansi_highlights')
if current_diff_layout and current_mode ~= desired_mode then
if current_diff_layout and current_diff_layout.mode ~= desired_mode then
local saved_pos = vim.api.nvim_win_get_cursor(0)
current_diff_layout.cleanup()
current_diff_layout = nil
@ -251,7 +275,7 @@ function M.update_diff_panes(
ansi_namespace
)
end
elseif desired_mode == 'none' then
elseif desired_mode == 'side-by-side' then
local expected_lines = vim.split(expected_content, '\n', { plain = true, trimempty = true })
local actual_lines = vim.split(actual_content, '\n', { plain = true })
utils.update_buffer_content(current_diff_layout.buffers[1], expected_lines, {})

(File diff suppressed because it is too large.)

@ -2,6 +2,9 @@ local M = {}
local logger = require('cp.log')
local _nix_python = nil
local _nix_discovered = false
local uname = vim.loop.os_uname()
local _time_cached = false
@ -57,7 +60,11 @@ local function find_gnu_time()
_time_cached = true
_time_path = nil
_time_reason = 'GNU time not found'
if uname and uname.sysname == 'Darwin' then
_time_reason = 'GNU time not found (install via: brew install coreutils)'
else
_time_reason = 'GNU time not found'
end
return _time_path, _time_reason
end
@ -79,46 +86,146 @@ function M.get_plugin_path()
return vim.fn.fnamemodify(plugin_path, ':h:h:h')
end
---@return boolean
function M.is_nix_build()
return _nix_python ~= nil
end
---@return string|nil
function M.get_nix_python()
return _nix_python
end
---@return boolean
function M.is_nix_discovered()
return _nix_discovered
end
---@param module string
---@param plugin_path string
---@return string[]
function M.get_python_cmd(module, plugin_path)
if _nix_python then
return { _nix_python, '-m', 'scrapers.' .. module }
end
return { 'uv', 'run', '--directory', plugin_path, '-m', 'scrapers.' .. module }
end
local python_env_setup = false
---@return boolean
local function discover_nix_python()
local cache_dir = vim.fn.stdpath('cache') .. '/cp-nvim'
local cache_file = cache_dir .. '/nix-python'
local f = io.open(cache_file, 'r')
if f then
local cached = f:read('*l')
f:close()
if cached and vim.fn.executable(cached) == 1 then
_nix_python = cached
return true
end
end
local plugin_path = M.get_plugin_path()
vim.notify('[cp.nvim] Building Python environment with nix...', vim.log.levels.INFO)
vim.cmd.redraw()
local result = vim
.system(
{ 'nix', 'build', plugin_path .. '#pythonEnv', '--no-link', '--print-out-paths' },
{ text = true }
)
:wait()
if result.code ~= 0 then
logger.log('nix build #pythonEnv failed: ' .. (result.stderr or ''), vim.log.levels.WARN)
return false
end
local store_path = result.stdout:gsub('%s+$', '')
local python_path = store_path .. '/bin/python3'
if vim.fn.executable(python_path) ~= 1 then
logger.log('nix python not executable at ' .. python_path, vim.log.levels.WARN)
return false
end
vim.fn.mkdir(cache_dir, 'p')
f = io.open(cache_file, 'w')
if f then
f:write(python_path)
f:close()
end
_nix_python = python_path
_nix_discovered = true
return true
end
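`discover_nix_python()` above boils down to a single nix invocation plus a cache file; roughly equivalent shell, with the plugin path and store path illustrative:

```sh
# Build the flake's pythonEnv output without a result symlink and
# print its store path; the plugin then caches <store>/bin/python3
# in ~/.cache/nvim/cp-nvim/nix-python for later sessions.
nix build /path/to/cp.nvim#pythonEnv --no-link --print-out-paths
```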
---@return boolean success
function M.setup_python_env()
if python_env_setup then
return true
end
local plugin_path = M.get_plugin_path()
local venv_dir = plugin_path .. '/.venv'
if vim.fn.executable('uv') == 0 then
logger.log(
'uv is not installed. Install it to enable problem scraping: https://docs.astral.sh/uv/',
vim.log.levels.WARN
)
return false
if _nix_python then
logger.log('Python env: nix (python=' .. _nix_python .. ')')
python_env_setup = true
return true
end
if vim.fn.isdirectory(venv_dir) == 0 then
logger.log('Setting up Python environment for scrapers...')
local result = vim.system({ 'uv', 'sync' }, { cwd = plugin_path, text = true }):wait()
if vim.fn.executable('uv') == 1 then
local plugin_path = M.get_plugin_path()
logger.log('Python env: uv sync (dir=' .. plugin_path .. ')')
vim.notify('[cp.nvim] Setting up Python environment...', vim.log.levels.INFO)
vim.cmd.redraw()
local env = vim.fn.environ()
env.VIRTUAL_ENV = ''
env.PYTHONPATH = ''
env.CONDA_PREFIX = ''
local result = vim
.system({ 'uv', 'sync' }, { cwd = plugin_path, text = true, env = env })
:wait()
if result.code ~= 0 then
logger.log('Failed to setup Python environment: ' .. result.stderr, vim.log.levels.ERROR)
logger.log(
'Failed to setup Python environment: ' .. (result.stderr or ''),
vim.log.levels.ERROR
)
return false
end
logger.log('Python environment setup complete.')
if result.stderr and result.stderr ~= '' then
logger.log('uv sync stderr: ' .. result.stderr:gsub('%s+$', ''))
end
python_env_setup = true
return true
end
python_env_setup = true
return true
if vim.fn.executable('nix') == 1 then
logger.log('Python env: nix discovery')
if discover_nix_python() then
python_env_setup = true
return true
end
end
logger.log(
'No Python environment available. Install uv (https://docs.astral.sh/uv/) or use nix.',
vim.log.levels.WARN
)
return false
end
--- Configure the buffer with good defaults
---@param filetype? string
function M.create_buffer_with_options(filetype)
local buf = vim.api.nvim_create_buf(false, true)
vim.api.nvim_set_option_value('bufhidden', 'wipe', { buf = buf })
vim.api.nvim_set_option_value('bufhidden', 'hide', { buf = buf })
vim.api.nvim_set_option_value('readonly', true, { buf = buf })
vim.api.nvim_set_option_value('modifiable', false, { buf = buf })
if filetype then
vim.api.nvim_set_option_value('filetype', filetype, { buf = buf })
end
@ -155,20 +262,12 @@ function M.check_required_runtime()
local time = M.time_capability()
if not time.ok then
return false, 'GNU time not found: ' .. (time.reason or '')
return false, time.reason
end
local timeout = M.timeout_capability()
if not timeout.ok then
return false, 'GNU timeout not found: ' .. (timeout.reason or '')
end
if vim.fn.executable('uv') ~= 1 then
return false, 'uv not found (https://docs.astral.sh/uv/)'
end
if not M.setup_python_env() then
return false, 'failed to set up Python virtual environment'
return false, timeout.reason
end
return true
@ -218,7 +317,11 @@ local function find_gnu_timeout()
_timeout_cached = true
_timeout_path = nil
_timeout_reason = 'GNU timeout not found'
if uname and uname.sysname == 'Darwin' then
_timeout_reason = 'GNU timeout not found (install via: brew install coreutils)'
else
_timeout_reason = 'GNU timeout not found'
end
return _timeout_path, _timeout_reason
end
@ -255,4 +358,8 @@ function M.cwd_executables()
return out
end
function M.ensure_dirs()
vim.system({ 'mkdir', '-p', 'build', 'io' }):wait()
end
return M

@ -154,3 +154,17 @@ end, {
return {}
end,
})
local function cp_action(action)
return function()
require('cp').handle_command({ fargs = { action } })
end
end
vim.keymap.set('n', '<Plug>(cp-run)', cp_action('run'), { desc = 'CP run tests' })
vim.keymap.set('n', '<Plug>(cp-panel)', cp_action('panel'), { desc = 'CP open panel' })
vim.keymap.set('n', '<Plug>(cp-edit)', cp_action('edit'), { desc = 'CP edit test cases' })
vim.keymap.set('n', '<Plug>(cp-next)', cp_action('next'), { desc = 'CP next problem' })
vim.keymap.set('n', '<Plug>(cp-prev)', cp_action('prev'), { desc = 'CP previous problem' })
vim.keymap.set('n', '<Plug>(cp-pick)', cp_action('pick'), { desc = 'CP pick contest' })
vim.keymap.set('n', '<Plug>(cp-interact)', cp_action('interact'), { desc = 'CP interactive mode' })
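These `<Plug>` mappings let users bind plugin actions without calling `handle_command` directly; a minimal user mapping sketch (the left-hand sides are illustrative):

```lua
-- Example user config binding leader keys to the new <Plug> targets.
vim.keymap.set('n', '<leader>cr', '<Plug>(cp-run)')
vim.keymap.set('n', '<leader>ce', '<Plug>(cp-edit)')
vim.keymap.set('n', '<leader>cn', '<Plug>(cp-next)')
```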

@ -1,7 +1,7 @@
[project]
name = "scrapers"
version = "0.1.0"
description = "Add your description here"
description = "Competitive programming scrapers for a variety of web platforms."
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
@ -12,18 +12,18 @@ dependencies = [
"ndjson>=0.3.1",
"pydantic>=2.11.10",
"requests>=2.32.5",
"scrapling[fetchers]>=0.3.5",
]
[dependency-groups]
dev = [
"mypy>=1.18.2",
"types-beautifulsoup4>=4.12.0.20250516",
"types-requests>=2.32.4.20250913",
"pytest>=8.0.0",
"pytest-mock>=3.12.0",
"pre-commit>=4.3.0",
"basedpyright>=1.31.6",
"ruff>=0.14.2",
"ty>=0.0.1a32",
]
[tool.pytest.ini_options]

@ -16,6 +16,7 @@ from urllib3.util.retry import Retry
from .base import BaseScraper
from .models import (
CombinedTest,
ContestListResult,
ContestSummary,
MetadataResult,
@ -242,7 +243,7 @@ def _to_problem_summaries(rows: list[dict[str, str]]) -> list[ProblemSummary]:
async def _fetch_all_contests_async() -> list[ContestSummary]:
async with httpx.AsyncClient(
limits=httpx.Limits(max_connections=100, max_keepalive_connections=100)
limits=httpx.Limits(max_connections=100, max_keepalive_connections=100),
) as client:
first_html = await _get_async(client, ARCHIVE_URL)
last = _parse_last_page(first_html)
@ -265,43 +266,31 @@ class AtcoderScraper(BaseScraper):
return "atcoder"
async def scrape_contest_metadata(self, contest_id: str) -> MetadataResult:
async def impl(cid: str) -> MetadataResult:
try:
rows = await asyncio.to_thread(_scrape_tasks_sync, cid)
except requests.HTTPError as e:
if e.response is not None and e.response.status_code == 404:
return self._create_metadata_error(
f"No problems found for contest {cid}", cid
)
raise
try:
rows = await asyncio.to_thread(_scrape_tasks_sync, contest_id)
problems = _to_problem_summaries(rows)
if not problems:
return self._create_metadata_error(
f"No problems found for contest {cid}", cid
return self._metadata_error(
f"No problems found for contest {contest_id}"
)
return MetadataResult(
success=True,
error="",
contest_id=cid,
contest_id=contest_id,
problems=problems,
url=f"https://atcoder.jp/contests/{contest_id}/tasks/{contest_id}_%s",
)
return await self._safe_execute("metadata", impl, contest_id)
except Exception as e:
return self._metadata_error(str(e))
async def scrape_contest_list(self) -> ContestListResult:
async def impl() -> ContestListResult:
try:
contests = await _fetch_all_contests_async()
except Exception as e:
return self._create_contests_error(str(e))
try:
contests = await _fetch_all_contests_async()
if not contests:
return self._create_contests_error("No contests found")
return self._contests_error("No contests found")
return ContestListResult(success=True, error="", contests=contests)
return await self._safe_execute("contests", impl)
except Exception as e:
return self._contests_error(str(e))
async def stream_tests_for_category_async(self, category_id: str) -> None:
rows = await asyncio.to_thread(_scrape_tasks_sync, category_id)
@ -313,16 +302,23 @@ class AtcoderScraper(BaseScraper):
return
data = await asyncio.to_thread(_scrape_problem_page_sync, category_id, slug)
tests: list[TestCase] = data.get("tests", [])
combined_input = "\n".join(t.input for t in tests) if tests else ""
combined_expected = "\n".join(t.expected for t in tests) if tests else ""
print(
json.dumps(
{
"problem_id": letter,
"combined": {
"input": combined_input,
"expected": combined_expected,
},
"tests": [
{"input": t.input, "expected": t.expected} for t in tests
],
"timeout_ms": data.get("timeout_ms", 0),
"memory_mb": data.get("memory_mb", 0),
"interactive": bool(data.get("interactive")),
"multi_test": False,
}
),
flush=True,
@ -364,6 +360,7 @@ async def main_async() -> int:
success=False,
error="Usage: atcoder.py tests <contest_id>",
problem_id="",
combined=CombinedTest(input="", expected=""),
tests=[],
timeout_ms=0,
memory_mb=0,

@ -1,9 +1,8 @@
import asyncio
import sys
from abc import ABC, abstractmethod
from typing import Any, Awaitable, Callable, ParamSpec, cast
from .models import ContestListResult, MetadataResult, TestsResult
P = ParamSpec("P")
from .models import CombinedTest, ContestListResult, MetadataResult, TestsResult
class BaseScraper(ABC):
@ -20,54 +19,65 @@ class BaseScraper(ABC):
@abstractmethod
async def stream_tests_for_category_async(self, category_id: str) -> None: ...
def _create_metadata_error(
self, error_msg: str, contest_id: str = ""
) -> MetadataResult:
return MetadataResult(
success=False,
error=f"{self.platform_name}: {error_msg}",
contest_id=contest_id,
problems=[],
url="",
)
def _usage(self) -> str:
name = self.platform_name
return f"Usage: {name}.py metadata <id> | tests <id> | contests"
def _create_tests_error(
self, error_msg: str, problem_id: str = "", url: str = ""
) -> TestsResult:
def _metadata_error(self, msg: str) -> MetadataResult:
return MetadataResult(success=False, error=msg, url="")
def _tests_error(self, msg: str) -> TestsResult:
return TestsResult(
success=False,
error=f"{self.platform_name}: {error_msg}",
problem_id=problem_id,
error=msg,
problem_id="",
combined=CombinedTest(input="", expected=""),
tests=[],
timeout_ms=0,
memory_mb=0,
interactive=False,
)
def _create_contests_error(self, error_msg: str) -> ContestListResult:
return ContestListResult(
success=False,
error=f"{self.platform_name}: {error_msg}",
contests=[],
)
def _contests_error(self, msg: str) -> ContestListResult:
return ContestListResult(success=False, error=msg)
async def _safe_execute(
self,
operation: str,
func: Callable[P, Awaitable[Any]],
*args: P.args,
**kwargs: P.kwargs,
):
try:
return await func(*args, **kwargs)
except Exception as e:
if operation == "metadata":
contest_id = cast(str, args[0]) if args else ""
return self._create_metadata_error(str(e), contest_id)
elif operation == "tests":
problem_id = cast(str, args[1]) if len(args) > 1 else ""
return self._create_tests_error(str(e), problem_id)
elif operation == "contests":
return self._create_contests_error(str(e))
else:
raise
async def _run_cli_async(self, args: list[str]) -> int:
if len(args) < 2:
print(self._metadata_error(self._usage()).model_dump_json())
return 1
mode = args[1]
match mode:
case "metadata":
if len(args) != 3:
print(self._metadata_error(self._usage()).model_dump_json())
return 1
result = await self.scrape_contest_metadata(args[2])
print(result.model_dump_json())
return 0 if result.success else 1
case "tests":
if len(args) != 3:
print(self._tests_error(self._usage()).model_dump_json())
return 1
await self.stream_tests_for_category_async(args[2])
return 0
case "contests":
if len(args) != 2:
print(self._contests_error(self._usage()).model_dump_json())
return 1
result = await self.scrape_contest_list()
print(result.model_dump_json())
return 0 if result.success else 1
case _:
print(
self._metadata_error(
f"Unknown mode: {mode}. {self._usage()}"
).model_dump_json()
)
return 1
def run_cli(self) -> None:
sys.exit(asyncio.run(self._run_cli_async(sys.argv)))
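With the `match`-based dispatch above, every scraper exposes the same three subcommands. Invocations look roughly like this (the contest id is made up; the `uv run` form mirrors `get_python_cmd`):

```sh
uv run --directory /path/to/cp.nvim -m scrapers.codeforces metadata 1900
uv run --directory /path/to/cp.nvim -m scrapers.codeforces tests 1900
uv run --directory /path/to/cp.nvim -m scrapers.codeforces contests
```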

scrapers/codechef.py (new file, 253 lines)
@ -0,0 +1,253 @@
#!/usr/bin/env python3
import asyncio
import json
import re
from typing import Any
import httpx
from curl_cffi import requests as curl_requests
from .base import BaseScraper
from .models import (
ContestListResult,
ContestSummary,
MetadataResult,
ProblemSummary,
TestCase,
)
BASE_URL = "https://www.codechef.com"
API_CONTESTS_ALL = "/api/list/contests/all"
API_CONTEST = "/api/contests/{contest_id}"
API_PROBLEM = "/api/contests/{contest_id}/problems/{problem_id}"
PROBLEM_URL = "https://www.codechef.com/problems/{problem_id}"
HEADERS = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
}
TIMEOUT_S = 15.0
CONNECTIONS = 8
MEMORY_LIMIT_RE = re.compile(
r"Memory\s+[Ll]imit.*?([0-9.]+)\s*(MB|GB)", re.IGNORECASE | re.DOTALL
)
async def fetch_json(client: httpx.AsyncClient, path: str) -> dict:
r = await client.get(BASE_URL + path, headers=HEADERS, timeout=TIMEOUT_S)
r.raise_for_status()
return r.json()
def _extract_memory_limit(html: str) -> float:
m = MEMORY_LIMIT_RE.search(html)
if not m:
return 256.0
value = float(m.group(1))
unit = m.group(2).upper()
if unit == "GB":
return value * 1024.0
return value
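The memory-limit parsing above normalizes GB to MB and falls back to 256 MB; a standalone sketch of the same regex and conversion (the HTML snippets are made up, not real CodeChef markup):

```python
import re

# Same pattern as MEMORY_LIMIT_RE in scrapers/codechef.py.
MEMORY_LIMIT_RE = re.compile(
    r"Memory\s+[Ll]imit.*?([0-9.]+)\s*(MB|GB)", re.IGNORECASE | re.DOTALL
)

def extract_memory_limit(html: str) -> float:
    m = MEMORY_LIMIT_RE.search(html)
    if not m:
        return 256.0  # default when the page states no limit
    value = float(m.group(1))
    return value * 1024.0 if m.group(2).upper() == "GB" else value

print(extract_memory_limit("Memory limit: 1.5 GB"))   # 1536.0
print(extract_memory_limit("Memory Limit: 256 MB"))   # 256.0
print(extract_memory_limit("<p>statement only</p>"))  # 256.0
```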
def _fetch_html_sync(url: str) -> str:
response = curl_requests.get(url, impersonate="chrome", timeout=TIMEOUT_S)
response.raise_for_status()
return response.text
class CodeChefScraper(BaseScraper):
@property
def platform_name(self) -> str:
return "codechef"
async def scrape_contest_metadata(self, contest_id: str) -> MetadataResult:
try:
async with httpx.AsyncClient() as client:
data = await fetch_json(
client, API_CONTEST.format(contest_id=contest_id)
)
if not data.get("problems"):
return self._metadata_error(
f"No problems found for contest {contest_id}"
)
problems = []
for problem_code, problem_data in data["problems"].items():
if problem_data.get("category_name") == "main":
problems.append(
ProblemSummary(
id=problem_code,
name=problem_data.get("name", problem_code),
)
)
return MetadataResult(
success=True,
error="",
contest_id=contest_id,
problems=problems,
url=f"{BASE_URL}/{contest_id}",
)
except Exception as e:
return self._metadata_error(f"Failed to fetch contest {contest_id}: {e}")
async def scrape_contest_list(self) -> ContestListResult:
async with httpx.AsyncClient() as client:
try:
data = await fetch_json(client, API_CONTESTS_ALL)
except httpx.HTTPStatusError as e:
return self._contests_error(f"Failed to fetch contests: {e}")
all_contests = data.get("future_contests", []) + data.get(
"past_contests", []
)
max_num = 0
for contest in all_contests:
contest_code = contest.get("contest_code", "")
if contest_code.startswith("START"):
match = re.match(r"START(\d+)", contest_code)
if match:
num = int(match.group(1))
max_num = max(max_num, num)
if max_num == 0:
return self._contests_error("No Starters contests found")
contests = []
sem = asyncio.Semaphore(CONNECTIONS)
async def fetch_divisions(i: int) -> list[ContestSummary]:
parent_id = f"START{i}"
async with sem:
try:
parent_data = await fetch_json(
client, API_CONTEST.format(contest_id=parent_id)
)
except Exception as e:
import sys
print(f"Error fetching {parent_id}: {e}", file=sys.stderr)
return []
child_contests = parent_data.get("child_contests", {})
if not child_contests:
return []
base_name = f"Starters {i}"
divisions = []
for div_key, div_data in child_contests.items():
div_code = div_data.get("contest_code", "")
div_num = div_data.get("div", {}).get("div_number", "")
if div_code and div_num:
divisions.append(
ContestSummary(
id=div_code,
name=base_name,
display_name=f"{base_name} (Div. {div_num})",
)
)
return divisions
tasks = [fetch_divisions(i) for i in range(1, max_num + 1)]
for coro in asyncio.as_completed(tasks):
divisions = await coro
contests.extend(divisions)
return ContestListResult(success=True, error="", contests=contests)
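The Starters scan above only needs the highest `START<n>` code before probing each contest's divisions; the numbering logic in isolation (the contest codes below are invented):

```python
import re

# Find the largest Starters number among contest codes, as
# scrape_contest_list does before fetching each START{i}'s divisions.
codes = ["START100", "START221B", "COOK150", "START175D"]
max_num = 0
for code in codes:
    if code.startswith("START"):
        m = re.match(r"START(\d+)", code)
        if m:
            max_num = max(max_num, int(m.group(1)))
print(max_num)  # 221
```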
async def stream_tests_for_category_async(self, category_id: str) -> None:
async with httpx.AsyncClient(
limits=httpx.Limits(max_connections=CONNECTIONS)
) as client:
try:
contest_data = await fetch_json(
client, API_CONTEST.format(contest_id=category_id)
)
except Exception as e:
print(
json.dumps(
{"error": f"Failed to fetch contest {category_id}: {str(e)}"}
),
flush=True,
)
return
all_problems = contest_data.get("problems", {})
if not all_problems:
print(
json.dumps(
{"error": f"No problems found for contest {category_id}"}
),
flush=True,
)
return
problems = {
code: data
for code, data in all_problems.items()
if data.get("category_name") == "main"
}
if not problems:
print(
json.dumps(
{"error": f"No main problems found for contest {category_id}"}
),
flush=True,
)
return
sem = asyncio.Semaphore(CONNECTIONS)
async def run_one(problem_code: str) -> dict[str, Any]:
async with sem:
try:
problem_data = await fetch_json(
client,
API_PROBLEM.format(
contest_id=category_id, problem_id=problem_code
),
)
sample_tests = (
problem_data.get("problemComponents", {}).get(
"sampleTestCases", []
)
or []
)
tests = [
TestCase(
input=t.get("input", "").strip(),
expected=t.get("output", "").strip(),
)
for t in sample_tests
if not t.get("isDeleted", False)
]
time_limit_str = problem_data.get("max_timelimit", "1")
timeout_ms = int(float(time_limit_str) * 1000)
problem_url = PROBLEM_URL.format(problem_id=problem_code)
loop = asyncio.get_event_loop()
html = await loop.run_in_executor(
None, _fetch_html_sync, problem_url
)
memory_mb = _extract_memory_limit(html)
interactive = False
except Exception:
tests = []
timeout_ms = 1000
memory_mb = 256.0
interactive = False
combined_input = "\n".join(t.input for t in tests) if tests else ""
combined_expected = (
"\n".join(t.expected for t in tests) if tests else ""
)
return {
"problem_id": problem_code,
"combined": {
"input": combined_input,
"expected": combined_expected,
},
"tests": [
{"input": t.input, "expected": t.expected} for t in tests
],
"timeout_ms": timeout_ms,
"memory_mb": memory_mb,
"interactive": interactive,
"multi_test": False,
}
tasks = [run_one(problem_code) for problem_code in problems.keys()]
for coro in asyncio.as_completed(tasks):
payload = await coro
print(json.dumps(payload), flush=True)
if __name__ == "__main__":
CodeChefScraper().run_cli()

@ -2,14 +2,12 @@
import asyncio
import json
import logging
import re
import sys
from typing import Any
import requests
from bs4 import BeautifulSoup, Tag
from scrapling.fetchers import StealthyFetcher
from curl_cffi import requests as curl_requests
from .base import BaseScraper
from .models import (
@ -18,13 +16,8 @@ from .models import (
MetadataResult,
ProblemSummary,
TestCase,
TestsResult,
)
# suppress scrapling logging - https://github.com/D4Vinci/Scrapling/issues/31)
logging.getLogger("scrapling").setLevel(logging.CRITICAL)
BASE_URL = "https://codeforces.com"
API_CONTEST_LIST_URL = f"{BASE_URL}/api/contest.list"
TIMEOUT_SECONDS = 30
@ -83,19 +76,19 @@ def _extract_title(block: Tag) -> tuple[str, str]:
return parts[0].strip().upper(), parts[1].strip()
def _extract_samples(block: Tag) -> list[TestCase]:
def _extract_samples(block: Tag) -> tuple[list[TestCase], bool]:
st = block.find("div", class_="sample-test")
if not st:
return []
if not isinstance(st, Tag):
return [], False
input_pres: list[Tag] = [ # type: ignore[misc]
inp.find("pre") # type: ignore[misc]
for inp in st.find_all("div", class_="input") # type: ignore[union-attr]
input_pres: list[Tag] = [
inp.find("pre")
for inp in st.find_all("div", class_="input")
if isinstance(inp, Tag) and inp.find("pre")
]
output_pres: list[Tag] = [
out.find("pre") # type: ignore[misc]
for out in st.find_all("div", class_="output") # type: ignore[union-attr]
out.find("pre")
for out in st.find_all("div", class_="output")
if isinstance(out, Tag) and out.find("pre")
]
input_pres = [p for p in input_pres if isinstance(p, Tag)]
@ -119,18 +112,19 @@ def _extract_samples(block: Tag) -> list[TestCase]:
outputs_by_gid.pop(0, None)
keys = sorted(set(inputs_by_gid.keys()) & set(outputs_by_gid.keys()))
if keys:
return [
samples = [
TestCase(
input="\n".join(inputs_by_gid[k]).strip(),
expected="\n".join(outputs_by_gid[k]).strip(),
)
for k in keys
]
return samples, True
inputs = [_text_from_pre(p) for p in input_pres]
outputs = [_text_from_pre(p) for p in output_pres]
n = min(len(inputs), len(outputs))
return [TestCase(input=inputs[i], expected=outputs[i]) for i in range(n)]
return [TestCase(input=inputs[i], expected=outputs[i]) for i in range(n)], False
def _is_interactive(block: Tag) -> bool:
@ -141,12 +135,9 @@ def _is_interactive(block: Tag) -> bool:
def _fetch_problems_html(contest_id: str) -> str:
url = f"{BASE_URL}/contest/{contest_id}/problems"
page = StealthyFetcher.fetch(
url,
headless=True,
solve_cloudflare=True,
)
return page.html_content
response = curl_requests.get(url, impersonate="chrome", timeout=TIMEOUT_SECONDS)
response.raise_for_status()
return response.text
def _parse_all_blocks(html: str) -> list[dict[str, Any]]:
@ -156,20 +147,38 @@ def _parse_all_blocks(html: str) -> list[dict[str, Any]]:
for b in blocks:
holder = b.find_parent("div", class_="problemindexholder")
letter = (holder.get("problemindex") if holder else "").strip().upper()
name = _extract_title(b)[1] # keep your name extraction
name = _extract_title(b)[1]
if not letter:
continue
tests = _extract_samples(b)
raw_samples, is_grouped = _extract_samples(b)
timeout_ms, memory_mb = _extract_limits(b)
interactive = _is_interactive(b)
if is_grouped and raw_samples:
combined_input = f"{len(raw_samples)}\n" + "\n".join(
tc.input for tc in raw_samples
)
combined_expected = "\n".join(tc.expected for tc in raw_samples)
individual_tests = [
TestCase(input=f"1\n{tc.input}", expected=tc.expected)
for tc in raw_samples
]
else:
combined_input = "\n".join(tc.input for tc in raw_samples)
combined_expected = "\n".join(tc.expected for tc in raw_samples)
individual_tests = raw_samples
out.append(
{
"letter": letter,
"name": name,
"tests": tests,
"combined_input": combined_input,
"combined_expected": combined_expected,
"tests": individual_tests,
"timeout_ms": timeout_ms,
"memory_mb": memory_mb,
"interactive": interactive,
"multi_test": is_grouped,
}
)
return out
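For grouped (multi-test) sample blocks, `_parse_all_blocks` now prefixes the combined input with the test count and wraps each individual case with `1\n`; the transformation on its own, with invented sample data:

```python
from dataclasses import dataclass

# Stand-in for scrapers.models.TestCase.
@dataclass
class TestCase:
    input: str
    expected: str

raw_samples = [TestCase("3 1 2", "6"), TestCase("5 5", "10")]

# Grouped case: one run consumes all samples, so prepend the count...
combined_input = f"{len(raw_samples)}\n" + "\n".join(tc.input for tc in raw_samples)
combined_expected = "\n".join(tc.expected for tc in raw_samples)
# ...while each individual test declares itself a one-case multi-test.
individual_tests = [
    TestCase(input=f"1\n{tc.input}", expected=tc.expected) for tc in raw_samples
]

print(repr(combined_input))             # '2\n3 1 2\n5 5'
print(repr(individual_tests[0].input))  # '1\n3 1 2'
```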
@ -191,49 +200,46 @@ class CodeforcesScraper(BaseScraper):
return "codeforces"
async def scrape_contest_metadata(self, contest_id: str) -> MetadataResult:
async def impl(cid: str) -> MetadataResult:
problems = await asyncio.to_thread(_scrape_contest_problems_sync, cid)
try:
problems = await asyncio.to_thread(
_scrape_contest_problems_sync, contest_id
)
if not problems:
return self._create_metadata_error(
f"No problems found for contest {cid}", cid
return self._metadata_error(
f"No problems found for contest {contest_id}"
)
return MetadataResult(
success=True,
error="",
contest_id=cid,
contest_id=contest_id,
problems=problems,
url=f"https://codeforces.com/contest/{contest_id}/%s",
url=f"https://codeforces.com/contest/{contest_id}/problem/%s",
)
return await self._safe_execute("metadata", impl, contest_id)
except Exception as e:
return self._metadata_error(str(e))
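The rewritten `scrape_contest_metadata` above follows a simple shape: run the blocking sync scrape off the event loop with `asyncio.to_thread`, and fold any failure into an error result instead of letting the exception escape. A minimal sketch of that shape, with a fake `blocking_fetch` standing in for the real network/parsing call:

```python
import asyncio


def blocking_fetch(contest_id: str) -> list[str]:
    # Stand-in for the synchronous scrape (network request + HTML parsing).
    return [f"{contest_id}A", f"{contest_id}B"]


async def scrape(contest_id: str) -> dict:
    # asyncio.to_thread moves the blocking call to a worker thread so the
    # event loop stays responsive; errors become a structured result.
    try:
        problems = await asyncio.to_thread(blocking_fetch, contest_id)
        if not problems:
            return {
                "success": False,
                "error": f"No problems found for contest {contest_id}",
            }
        return {"success": True, "error": "", "problems": problems}
    except Exception as e:
        return {"success": False, "error": str(e)}


result = asyncio.run(scrape("1234"))
```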
async def scrape_contest_list(self) -> ContestListResult:
async def impl() -> ContestListResult:
try:
r = requests.get(API_CONTEST_LIST_URL, timeout=TIMEOUT_SECONDS)
r.raise_for_status()
data = r.json()
if data.get("status") != "OK":
return self._create_contests_error("Invalid API response")
try:
r = requests.get(API_CONTEST_LIST_URL, timeout=TIMEOUT_SECONDS)
r.raise_for_status()
data = r.json()
if data.get("status") != "OK":
return self._contests_error("Invalid API response")
contests: list[ContestSummary] = []
for c in data["result"]:
if c.get("phase") != "FINISHED":
continue
cid = str(c["id"])
name = c["name"]
contests.append(
ContestSummary(id=cid, name=name, display_name=name)
)
contests: list[ContestSummary] = []
for c in data["result"]:
if c.get("phase") != "FINISHED":
continue
cid = str(c["id"])
name = c["name"]
contests.append(ContestSummary(id=cid, name=name, display_name=name))
if not contests:
return self._create_contests_error("No contests found")
if not contests:
return self._contests_error("No contests found")
return ContestListResult(success=True, error="", contests=contests)
except Exception as e:
return self._create_contests_error(str(e))
return await self._safe_execute("contests", impl)
return ContestListResult(success=True, error="", contests=contests)
except Exception as e:
return self._contests_error(str(e))
async def stream_tests_for_category_async(self, category_id: str) -> None:
html = await asyncio.to_thread(_fetch_problems_html, category_id)
@@ -246,84 +252,22 @@ class CodeforcesScraper(BaseScraper):
json.dumps(
{
"problem_id": pid,
"combined": {
"input": b.get("combined_input", ""),
"expected": b.get("combined_expected", ""),
},
"tests": [
{"input": t.input, "expected": t.expected} for t in tests
],
"timeout_ms": b.get("timeout_ms", 0),
"memory_mb": b.get("memory_mb", 0),
"interactive": bool(b.get("interactive")),
"multi_test": bool(b.get("multi_test", False)),
}
),
flush=True,
)
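The `print(json.dumps(...), flush=True)` call above emits one JSON object per line (NDJSON) with an explicit flush, so the consumer on the other end of the pipe sees each problem's tests as soon as they are scraped. A sketch of the pattern against an in-memory buffer (payload contents are illustrative):

```python
import io
import json


def stream_payloads(payloads, out):
    # One JSON object per line; flush after each write so a consumer
    # reading the stream incrementally is never stuck behind buffering.
    for p in payloads:
        print(json.dumps(p), file=out, flush=True)


buf = io.StringIO()
stream_payloads(
    [
        {
            "problem_id": "A",
            "tests": [{"input": "1 2", "expected": "3"}],
            "timeout_ms": 2000,
            "memory_mb": 256,
            "interactive": False,
            "multi_test": True,
        },
        {
            "problem_id": "B",
            "tests": [],
            "timeout_ms": 1000,
            "memory_mb": 256,
            "interactive": False,
            "multi_test": False,
        },
    ],
    buf,
)
lines = [json.loads(line) for line in buf.getvalue().splitlines()]
```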
async def main_async() -> int:
if len(sys.argv) < 2:
result = MetadataResult(
success=False,
error="Usage: codeforces.py metadata <contest_id> OR codeforces.py tests <contest_id> OR codeforces.py contests",
url="",
)
print(result.model_dump_json())
return 1
mode: str = sys.argv[1]
scraper = CodeforcesScraper()
if mode == "metadata":
if len(sys.argv) != 3:
result = MetadataResult(
success=False,
error="Usage: codeforces.py metadata <contest_id>",
url="",
)
print(result.model_dump_json())
return 1
contest_id = sys.argv[2]
result = await scraper.scrape_contest_metadata(contest_id)
print(result.model_dump_json())
return 0 if result.success else 1
if mode == "tests":
if len(sys.argv) != 3:
tests_result = TestsResult(
success=False,
error="Usage: codeforces.py tests <contest_id>",
problem_id="",
tests=[],
timeout_ms=0,
memory_mb=0,
)
print(tests_result.model_dump_json())
return 1
contest_id = sys.argv[2]
await scraper.stream_tests_for_category_async(contest_id)
return 0
if mode == "contests":
if len(sys.argv) != 2:
contest_result = ContestListResult(
success=False, error="Usage: codeforces.py contests"
)
print(contest_result.model_dump_json())
return 1
contest_result = await scraper.scrape_contest_list()
print(contest_result.model_dump_json())
return 0 if contest_result.success else 1
result = MetadataResult(
success=False,
error="Unknown mode. Use 'metadata <contest_id>', 'tests <contest_id>', or 'contests'",
url="",
)
print(result.model_dump_json())
return 1
def main() -> None:
sys.exit(asyncio.run(main_async()))
if __name__ == "__main__":
main()
CodeforcesScraper().run_cli()
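The diff above deletes the scraper's hand-rolled `main_async` argv dispatcher in favor of a shared `run_cli` on the base class. A hypothetical minimal sketch of such a dispatcher (mode names, result shape, and method names here are illustrative, not the plugin's actual API):

```python
import asyncio
import json
import sys


class BaseScraper:
    """Sketch of a shared CLI entry point: parse argv once, route to the
    mode handler, print one JSON result, exit nonzero on failure."""

    async def scrape_contest_metadata(self, contest_id):
        raise NotImplementedError

    async def _run_cli_async(self, argv):
        if len(argv) != 3 or argv[1] != "metadata":
            print(json.dumps({"success": False,
                              "error": "Usage: scraper.py metadata <contest_id>"}))
            return 1
        result = await self.scrape_contest_metadata(argv[2])
        print(json.dumps(result))
        return 0 if result.get("success") else 1

    def run_cli(self):
        # Each scraper module's __main__ collapses to one call of this.
        sys.exit(asyncio.run(self._run_cli_async(sys.argv)))


class DemoScraper(BaseScraper):
    async def scrape_contest_metadata(self, contest_id):
        return {"success": True, "error": "", "contest_id": contest_id}


rc = asyncio.run(DemoScraper()._run_cli_async(["demo.py", "metadata", "1234"]))
rc_bad = asyncio.run(DemoScraper()._run_cli_async(["demo.py"]))
```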


@@ -3,7 +3,6 @@
import asyncio
import json
import re
import sys
from typing import Any
import httpx
@@ -15,7 +14,6 @@ from .models import (
MetadataResult,
ProblemSummary,
TestCase,
TestsResult,
)
BASE_URL = "https://cses.fi"
@@ -233,14 +231,25 @@ class CSESScraper(BaseScraper):
except Exception:
tests = []
timeout_ms, memory_mb, interactive = 0, 0, False
combined_input = "\n".join(t.input for t in tests) if tests else ""
combined_expected = (
"\n".join(t.expected for t in tests) if tests else ""
)
return {
"problem_id": pid,
"combined": {
"input": combined_input,
"expected": combined_expected,
},
"tests": [
{"input": t.input, "expected": t.expected} for t in tests
],
"timeout_ms": timeout_ms,
"memory_mb": memory_mb,
"interactive": interactive,
"multi_test": False,
}
tasks = [run_one(p.id) for p in problems]
@@ -249,72 +258,5 @@ class CSESScraper(BaseScraper):
print(json.dumps(payload), flush=True)
async def main_async() -> int:
if len(sys.argv) < 2:
result = MetadataResult(
success=False,
error="Usage: cses.py metadata <category_id> OR cses.py tests <category> OR cses.py contests",
url="",
)
print(result.model_dump_json())
return 1
mode: str = sys.argv[1]
scraper = CSESScraper()
if mode == "metadata":
if len(sys.argv) != 3:
result = MetadataResult(
success=False,
error="Usage: cses.py metadata <category_id>",
url="",
)
print(result.model_dump_json())
return 1
category_id = sys.argv[2]
result = await scraper.scrape_contest_metadata(category_id)
print(result.model_dump_json())
return 0 if result.success else 1
if mode == "tests":
if len(sys.argv) != 3:
tests_result = TestsResult(
success=False,
error="Usage: cses.py tests <category>",
problem_id="",
tests=[],
timeout_ms=0,
memory_mb=0,
)
print(tests_result.model_dump_json())
return 1
category = sys.argv[2]
await scraper.stream_tests_for_category_async(category)
return 0
if mode == "contests":
if len(sys.argv) != 2:
contest_result = ContestListResult(
success=False, error="Usage: cses.py contests"
)
print(contest_result.model_dump_json())
return 1
contest_result = await scraper.scrape_contest_list()
print(contest_result.model_dump_json())
return 0 if contest_result.success else 1
result = MetadataResult(
success=False,
error=f"Unknown mode: {mode}. Use 'metadata <category>', 'tests <category>', or 'contests'",
url="",
)
print(result.model_dump_json())
return 1
def main() -> None:
sys.exit(asyncio.run(main_async()))
if __name__ == "__main__":
main()
CSESScraper().run_cli()


@@ -8,6 +8,13 @@ class TestCase(BaseModel):
model_config = ConfigDict(extra="forbid")
class CombinedTest(BaseModel):
input: str
expected: str
model_config = ConfigDict(extra="forbid")
class ProblemSummary(BaseModel):
id: str
name: str
@@ -46,10 +53,12 @@ class ContestListResult(ScrapingResult):
class TestsResult(ScrapingResult):
problem_id: str
combined: CombinedTest
tests: list[TestCase] = Field(default_factory=list)
timeout_ms: int
memory_mb: float
interactive: bool = False
multi_test: bool = False
model_config = ConfigDict(extra="forbid")
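The models diff above nests a `CombinedTest` inside `TestsResult` next to the individual `tests`. A dataclass stand-in showing the resulting serialized shape (the real models use pydantic with `ConfigDict(extra="forbid")`; `asdict` here plays the role of `model_dump`):

```python
from dataclasses import asdict, dataclass, field


@dataclass
class TestCase:
    input: str
    expected: str


@dataclass
class CombinedTest:
    input: str
    expected: str


@dataclass
class TestsResult:
    # Field layout mirrors the pydantic model in the diff above.
    success: bool
    error: str
    problem_id: str
    combined: CombinedTest
    tests: list = field(default_factory=list)
    timeout_ms: int = 0
    memory_mb: float = 0.0
    interactive: bool = False
    multi_test: bool = False


r = TestsResult(
    success=True,
    error="",
    problem_id="1234A",
    combined=CombinedTest(input="2\n1 2\n5 7", expected="3\n12"),
    tests=[TestCase("1\n1 2", "3"), TestCase("1\n5 7", "12")],
    timeout_ms=2000,
    memory_mb=256.0,
    multi_test=True,
)
payload = asdict(r)  # recursively serializes nested models to dicts
```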


@@ -1,11 +0,0 @@
describe('run module', function()
local run = require('cp.runner.run')
describe('basic functionality', function()
it('can get panel state', function()
local state = run.get_panel_state()
assert.is_table(state)
assert.is_table(state.test_cases)
end)
end)
end)


@@ -10,7 +10,7 @@ from typing import Any
import httpx
import pytest
import requests
from scrapling import fetchers
from curl_cffi import requests as curl_requests
ROOT = Path(__file__).resolve().parent.parent
FIX = Path(__file__).resolve().parent / "fixtures"
@@ -63,13 +63,13 @@ def run_scraper_offline(fixture_text):
target = target.removeprefix("https://cses.fi")
if target.strip("/") == "problemset":
return fixture_text("cses_contests.html")
return fixture_text("cses/contests.html")
if target.startswith("/problemset/task/") or target.startswith(
"problemset/task/"
):
pid = target.rstrip("/").split("/")[-1]
return fixture_text(f"cses_task_{pid}.html")
return fixture_text(f"cses/task_{pid}.html")
raise AssertionError(f"No fixture for CSES path={path!r} url={url!r}")
@@ -77,12 +77,12 @@ def run_scraper_offline(fixture_text):
if not url:
raise AssertionError("AtCoder expects url routing")
if "/contests/archive" in url:
return fixture_text("atcoder_contests.html")
return fixture_text("atcoder/contests.html")
if url.endswith("/tasks"):
return fixture_text("atcoder_abc100_tasks.html")
return fixture_text("atcoder/abc100_tasks.html")
if "/tasks/" in url:
slug = url.rsplit("/", 1)[-1]
return fixture_text(f"atcoder_task_{slug}.html")
return fixture_text(f"atcoder/task_{slug}.html")
raise AssertionError(f"No fixture for AtCoder url={url!r}")
def _router_codeforces(*, path: str | None = None, url: str | None = None) -> str:
@@ -90,17 +90,17 @@ def run_scraper_offline(fixture_text):
raise AssertionError("Codeforces expects url routing")
if "/contest/" in url and url.endswith("/problems"):
contest_id = url.rstrip("/").split("/")[-2]
return fixture_text(f"codeforces_{contest_id}_problems.html")
return fixture_text(f"codeforces/{contest_id}_problems.html")
if "/contests" in url and "/problem/" not in url:
return fixture_text("codeforces_contests.html")
return fixture_text("codeforces/contests.html")
if "/problem/" in url:
parts = url.rstrip("/").split("/")
contest_id, index = parts[-3], parts[-1]
return fixture_text(f"codeforces_{contest_id}_{index}.html")
return fixture_text(f"codeforces/{contest_id}_{index}.html")
if "/problemset/problem/" in url:
parts = url.rstrip("/").split("/")
contest_id, index = parts[-2], parts[-1]
return fixture_text(f"codeforces_{contest_id}_{index}.html")
return fixture_text(f"codeforces/{contest_id}_{index}.html")
raise AssertionError(f"No fixture for Codeforces url={url!r}")
@@ -136,12 +136,15 @@ def run_scraper_offline(fixture_text):
case "codeforces":
class MockPage:
class MockCurlResponse:
def __init__(self, html: str):
self.html_content = html
self.text = html
def _mock_stealthy_fetch(url: str, **kwargs):
return MockPage(_router_codeforces(url=url))
def raise_for_status(self):
pass
def _mock_curl_get(url: str, **kwargs):
return MockCurlResponse(_router_codeforces(url=url))
def _mock_requests_get(url: str, **kwargs):
if "api/contest.list" in url:
@@ -172,37 +175,97 @@ def run_scraper_offline(fixture_text):
raise AssertionError(f"Unexpected requests.get call: {url}")
return {
"StealthyFetcher.fetch": _mock_stealthy_fetch,
"curl_requests.get": _mock_curl_get,
"requests.get": _mock_requests_get,
}
case "codechef":
class MockResponse:
def __init__(self, json_data):
self._json_data = json_data
self.status_code = 200
def json(self):
return self._json_data
def raise_for_status(self):
pass
async def __offline_get_async(client, url: str, **kwargs):
if "/api/list/contests/all" in url:
data = json.loads(fixture_text("codechef/contests.json"))
return MockResponse(data)
if "/api/contests/START" in url and "/problems/" not in url:
contest_id = url.rstrip("/").split("/")[-1]
try:
data = json.loads(
fixture_text(f"codechef/{contest_id}.json")
)
return MockResponse(data)
except FileNotFoundError:
raise AssertionError(f"No fixture for CodeChef url={url!r}")
if "/api/contests/START" in url and "/problems/" in url:
parts = url.rstrip("/").split("/")
contest_id = parts[-3]
problem_id = parts[-1]
data = json.loads(
fixture_text(f"codechef/{contest_id}_{problem_id}.json")
)
return MockResponse(data)
raise AssertionError(f"No fixture for CodeChef url={url!r}")
class MockCodeChefCurlResponse:
def __init__(self, html: str):
self.text = html
def raise_for_status(self):
pass
def _mock_curl_get(url: str, **kwargs):
if "/problems/" in url:
problem_id = url.rstrip("/").split("/")[-1]
html = fixture_text(f"codechef/{problem_id}.html")
return MockCodeChefCurlResponse(html)
raise AssertionError(f"No fixture for CodeChef url={url!r}")
return {
"__offline_get_async": __offline_get_async,
"curl_requests.get": _mock_curl_get,
}
case _:
raise AssertionError(f"Unknown scraper: {scraper_name}")
scraper_classes = {
"cses": "CSESScraper",
"atcoder": "AtcoderScraper",
"codeforces": "CodeforcesScraper",
"codechef": "CodeChefScraper",
}
def _run(scraper_name: str, mode: str, *args: str):
mod_path = ROOT / "scrapers" / f"{scraper_name}.py"
ns = _load_scraper_module(mod_path, scraper_name)
offline_fetches = _make_offline_fetches(scraper_name)
if scraper_name == "codeforces":
fetchers.StealthyFetcher.fetch = offline_fetches["StealthyFetcher.fetch"] # type: ignore[assignment]
curl_requests.get = offline_fetches["curl_requests.get"]
requests.get = offline_fetches["requests.get"]
elif scraper_name == "atcoder":
ns._fetch = offline_fetches["_fetch"]
ns._get_async = offline_fetches["_get_async"]
elif scraper_name == "cses":
httpx.AsyncClient.get = offline_fetches["__offline_fetch_text"] # type: ignore[assignment]
httpx.AsyncClient.get = offline_fetches["__offline_fetch_text"]
elif scraper_name == "codechef":
httpx.AsyncClient.get = offline_fetches["__offline_get_async"]
curl_requests.get = offline_fetches["curl_requests.get"]
main_async = getattr(ns, "main_async")
assert callable(main_async), f"main_async not found in {scraper_name}"
scraper_class = getattr(ns, scraper_classes[scraper_name])
scraper = scraper_class()
argv = [str(mod_path), mode, *args]
old_argv = sys.argv
sys.argv = argv
try:
rc, out = _capture_stdout(main_async())
finally:
sys.argv = old_argv
rc, out = _capture_stdout(scraper._run_cli_async(argv))
json_lines: list[Any] = []
for line in (_line for _line in out.splitlines() if _line.strip()):
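The offline harness above swaps `curl_requests.get` for a fake that routes URLs to fixture files and returns an object mirroring the parts of the real response the scraper touches (`.text`, `.raise_for_status()`). A self-contained sketch of the same pattern, with made-up fixture contents and contest id:

```python
class MockCurlResponse:
    """Minimal stand-in for a curl_cffi response: only the attributes
    the scraper actually uses."""

    def __init__(self, html: str):
        self.text = html

    def raise_for_status(self):
        pass  # fixtures are always "200 OK"


FIXTURES = {  # hypothetical fixture contents keyed by contest id
    "2042": "<html>problems for 2042</html>",
}


def mock_curl_get(url: str, **kwargs) -> MockCurlResponse:
    # Route by URL shape, like the offline router in the tests above.
    if "/contest/" in url and url.endswith("/problems"):
        contest_id = url.rstrip("/").split("/")[-2]
        return MockCurlResponse(FIXTURES[contest_id])
    raise AssertionError(f"No fixture for url={url!r}")


def fetch_problems_html(contest_id: str, get=mock_curl_get) -> str:
    # Production code would receive curl_requests.get here instead;
    # injecting the getter is what makes the scraper testable offline.
    resp = get(f"https://codeforces.com/contest/{contest_id}/problems")
    resp.raise_for_status()
    return resp.text


html = fetch_problems_html("2042")
```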

tests/fixtures/codechef/P1209.html (vendored, 4343 lines; diff suppressed because it is too large)

tests/fixtures/codechef/START209.json (vendored, 116 lines)

@@ -0,0 +1,116 @@
{
"status": "success",
"user": { "username": null },
"code": "START209",
"isRatedContest": "1",
"isParentContestRated": "0",
"name": "Starters 209 (Rated till 5 star)",
"problems": [],
"banner": "https:\/\/cdn.codechef.com\/download\/small-banner\/START209\/1760933061.png",
"rules": "<h4>CodeChef: A Platform for Aspiring Programmers<\/h4>\n<p class=\"last\">CodeChef was created as a platform to help programmers make it big in the world of algorithms, computer programming, and programming contests. At CodeChef, our dedicated efforts are aimed at reviving the inner geek within you, as we proudly host a thrilling programming (coding) contest every Wednesday.<\/p>\n<h4>About CodeChef Starters:<\/h4>\n<p>CodeChef Starters is a short programming contest which takes place on every Wednesday\u00a0<\/p>\n<h4>Contest Details:<\/h4>\n<ul class=\"last\">\n<li><strong>D<\/strong><strong>uration: <\/strong>\u00a02.00 hours\u00a0<\/li>\n<li><strong>Start Date: <\/strong>Wednesday, 22nd October , 2025 at 20:00 HRS (IST)<\/li>\n<li><strong>End Date: <\/strong>Wednesday, 22nd October, 2025 at 22:00 HRS (IST)<\/li>\n<li>Check your timezone <a href=\"https:\/\/www.timeanddate.com\/worldclock\/fixedtime.html?msg=CodeChef+Starters+209&amp;iso=20251022T20&amp;p1=44&amp;ah=2\" target=\"_blank\" rel=\"nofollow noreferrer noopener\">here<\/a>.<\/li>\n<\/ul>\n<h4>Eligibility Criteria: Anyone with a knack for programming<\/h4>\n<p class=\"last\">Our contests are open to all programmers across the globe.<\/p>\n<h4>What's in it for you?<\/h4>\n<p>The idea behind these programming contests is that we want you to learn while competing. Also, we believe that it is alright to refer to tutorials, books, and other materials, learn a concept, and then apply the same to solve a problem during a contest. But it is <strong>not alright to copy other people's solutions or seek other people's help to solve a problem. <\/strong>All the participants are expected to abide to <a class=\"button blue\" href=\"..\/codeofconduct\">CodeChef's Code Of Conduct<\/a>.<\/p>\n<h4>Rules and Regulations:<\/h4>\n<ul>\n<li>This is an IOI-style contest. This means that the problems will be partially graded. 
You will get the score for passing certain test data.<\/li>\n<li>The details of the failed test cases will also be visible on your solution page.<\/li>\n<li>You can submit solutions as many times as you'd like, there are no penalties for incorrect submissions. Only your best correct submission will be considered.<\/li>\n<li>Those who achieve the score first will be placed higher in the ranklist in case of a tie.<\/li>\n<li><strong>We have removed all the Institutions that we could not identify from our database. We request you to update your institutions once again by going to your profile page.<\/strong><\/li>\n<li>You can also send in your queries in an email to <a href=\"mailto:help@codechef.com\" target=\"_blank\" rel=\"noreferrer noopener\">help@codechef.com<\/a>, during the contest.<\/li>\n<li>Please do not discuss strategy, suggestions, or tips in the comments during a live contest. Posting questions clarifying the problem statement is ok. If you are unsure, email us at <a href=\"mailto:feedback@codechef.com\" target=\"_blank\" rel=\"noreferrer noopener\"> feedback@codechef.com<\/a>.<\/li>\n<li>Discussing CodeChef's problems or any aspect of a problem, on any other platform on the web, on identification, could lead to the disabling of the respective account and banning from the community.<\/li>\n<\/ul>\n<p><strong>Note: You can now \"Code, Compile, and Run\" your codes on our <a href=\"..\/ide\">Online IDE<\/a>.<\/strong><\/p>\n<p>However, if you are using any other online development environment, make sure that other contestants don't have access to your code. As a contestant, you are responsible for making sure others don't access the code that you submit. If you use Ideone, make sure to mark your submission \"private\" (not secret)\".<\/p>",
"time": {
"start": 1761143400,
"end": 1761150600,
"freezing": 0,
"current": 1761370410
},
"ip": "2603:7000:3900:1358:3959:b692:6cf3:cb03",
"announcements": "<p><strong>CodeChef \u00d7 Coding Club League (2025-26)<\/strong><br \/><br \/>Partner with CodeChef to build a strong coding culture on campus!<\/p>\n<p><strong>Benefits for Clubs:<\/strong><\/p>\n<ul>\n<li>Platform access and support for Annual Technical events \/ hackathons<\/li>\n<li>Pro access for winners<\/li>\n<li>Dashboard to track member progress<\/li>\n<li>Discounts on CodeChef Pro for all members<\/li>\n<li>Co-branding &amp; promotion on CodeChef channels<br \/><br \/>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0<strong style=\"text-align:center;\"><a class=\"button blue\" href=\"codechef-coding-club\" target=\"_blank\" rel=\"noreferrer noopener\">\u00a0Click Here To Know More<\/a><\/strong><\/li>\n<\/ul>\n<p><strong>\u00a0<\/strong><\/p>",
"problemsstats": {
"attempted": [],
"partially_solved": [],
"solved": [],
"locked": []
},
"todos": [],
"stats": null,
"partial_scores": [],
"isRanklistFrozen": false,
"rank_and_score": { "score": "NA", "rank": "NA" },
"is_a_parent_contest": true,
"is_contest_elements_visible": true,
"is_OTP_required": false,
"is_linked_problems_contest": "0",
"custom_contest_page_title": "",
"custom_contest_page_meta_desc": "",
"contest_introduction": "https:\/\/discuss.codechef.com\/t\/invitation-to-codechef-starters-209-rated-upto-5-stars-22nd-october\/124401",
"contest_editorials": "https:\/\/discuss.codechef.com\/tag\/start209",
"contest_video_editorials": "",
"is_older_rating_based_division_system": false,
"division_generation": 3,
"isAssessmentContest": false,
"penalisedUsersCount": 0,
"ttl": 60,
"child_contests": {
"div_1": {
"div": {
"div_number": "1",
"code": "div_1",
"min_rating": 2000,
"max_rating": 50000,
"name": "Division 1",
"description": "Users with rating above 2000"
},
"division_generation": 3,
"contest_code": "START209A",
"contest_link": "\/START209A"
},
"div_2": {
"div": {
"div_number": "2",
"code": "div_2",
"min_rating": 1600,
"max_rating": 1999,
"name": "Division 2",
"description": "Users with rating between 1600 and 1999"
},
"division_generation": 3,
"contest_code": "START209B",
"contest_link": "\/START209B"
},
"div_3": {
"div": {
"div_number": "3",
"code": "div_3",
"min_rating": 1400,
"max_rating": 1599,
"name": "Division 3",
"description": "Users with rating upto 1599"
},
"division_generation": 3,
"contest_code": "START209C",
"contest_link": "\/START209C"
},
"div_4": {
"div": {
"div_number": "4",
"code": "div_4",
"min_rating": 0,
"max_rating": 1399,
"name": "Division 4",
"description": "Users with rating upto 1399"
},
"division_generation": 3,
"contest_code": "START209D",
"contest_link": "\/START209D"
}
},
"user_rating_div": {
"rating": -1,
"div": {
"code": "all",
"min_rating": 0,
"max_rating": 50000,
"name": "All",
"description": "All the users"
}
},
"user_contest_code": null,
"show_div_based_contest": false,
"is_registration_enabled_contest": false,
"is_flexi_time_contest": false,
"duration": "120",
"is_proctored": false,
"autoRefresh": true,
"visitedContests": []
}

tests/fixtures/codechef/START209D.json (vendored, 202 lines)

@@ -0,0 +1,202 @@
{
"status": "success",
"user": { "username": null },
"code": "START209D",
"isRatedContest": "1",
"isParentContestRated": "1",
"name": "Starters 209 (Rated)",
"problems": {
"P1209": {
"code": "P1209",
"name": "Bitcoin Market",
"type": "3",
"successful_submissions": "25131",
"allow_submission": false,
"accuracy": 85.680000000000007,
"problem_url": "\/problems\/P1209",
"submit_url": "\/problems\/P1209",
"status_url": "\/status\/P1209",
"is_added_to_practice": true,
"total_submissions": "33093",
"category_name": "main",
"is_direct_submittable": false
},
"P2209": {
"code": "P2209",
"name": "Divisible Duel",
"type": "3",
"successful_submissions": "21888",
"allow_submission": false,
"accuracy": 64.159999999999997,
"problem_url": "\/problems\/P2209",
"submit_url": "\/problems\/P2209",
"status_url": "\/status\/P2209",
"is_added_to_practice": true,
"total_submissions": "37437",
"category_name": "main",
"is_direct_submittable": false
},
"P3209": {
"code": "P3209",
"name": "Small GCD Sort",
"type": "3",
"successful_submissions": "13450",
"allow_submission": false,
"accuracy": 76.239999999999995,
"problem_url": "\/problems\/P3209",
"submit_url": "\/problems\/P3209",
"status_url": "\/status\/P3209",
"is_added_to_practice": true,
"total_submissions": "19164",
"category_name": "main",
"is_direct_submittable": false
},
"P4209": {
"code": "P4209",
"name": "Tactical Conversion",
"type": "3",
"successful_submissions": "1567",
"allow_submission": false,
"accuracy": 8.4499999999999993,
"problem_url": "\/problems\/P4209",
"submit_url": "\/problems\/P4209",
"status_url": "\/status\/P4209",
"is_added_to_practice": true,
"total_submissions": "20535",
"category_name": "main",
"is_direct_submittable": false
},
"P5209": {
"code": "P5209",
"name": "Binary Love",
"type": "3",
"successful_submissions": "3271",
"allow_submission": false,
"accuracy": 33.530000000000001,
"problem_url": "\/problems\/P5209",
"submit_url": "\/problems\/P5209",
"status_url": "\/status\/P5209",
"is_added_to_practice": true,
"total_submissions": "11128",
"category_name": "main",
"is_direct_submittable": false
},
"P6209E": {
"code": "P6209E",
"name": "High Score (Easy Version)",
"type": "3",
"successful_submissions": "285",
"allow_submission": false,
"accuracy": 7.2800000000000002,
"problem_url": "\/problems\/P6209E",
"submit_url": "\/problems\/P6209E",
"status_url": "\/status\/P6209E",
"is_added_to_practice": true,
"total_submissions": "4535",
"category_name": "main",
"is_direct_submittable": false
},
"P6209": {
"code": "P6209",
"name": "High Score (Hard Version)",
"type": "3",
"successful_submissions": "34",
"allow_submission": false,
"accuracy": 3.1899999999999999,
"problem_url": "\/problems\/P6209",
"submit_url": "\/problems\/P6209",
"status_url": "\/status\/P6209",
"is_added_to_practice": true,
"total_submissions": "1159",
"category_name": "main",
"is_direct_submittable": false
},
"P7209": {
"code": "P7209",
"name": "Easy Grid Game",
"type": "3",
"successful_submissions": "80",
"allow_submission": false,
"accuracy": 5.1100000000000003,
"problem_url": "\/problems\/P7209",
"submit_url": "\/problems\/P7209",
"status_url": "\/status\/P7209",
"is_added_to_practice": true,
"total_submissions": "1740",
"category_name": "main",
"is_direct_submittable": false
},
"P8209": {
"code": "P8209",
"name": "Counting Is Fun",
"type": "3",
"successful_submissions": "22",
"allow_submission": false,
"accuracy": 1.8200000000000001,
"problem_url": "\/problems\/P8209",
"submit_url": "\/problems\/P8209",
"status_url": "\/status\/P8209",
"is_added_to_practice": true,
"total_submissions": "1261",
"category_name": "main",
"is_direct_submittable": false
}
},
"banner": "https:\/\/cdn.codechef.com\/download\/small-banner\/START209D\/1760933097.png",
"rules": "<h4>CodeChef: A Platform for Aspiring Programmers<\/h4>\n<p class=\"last\">CodeChef was created as a platform to help programmers make it big in the world of algorithms, computer programming, and programming contests. At CodeChef, our dedicated efforts are aimed at reviving the inner geek within you, as we proudly host a thrilling programming (coding) contest every Wednesday.<\/p>\n<h4>About CodeChef Starters:<\/h4>\n<p>CodeChef Starters is a short programming contest which takes place on every Wednesday\u00a0<\/p>\n<h4>Contest Details:<\/h4>\n<ul class=\"last\">\n<li><strong>D<\/strong><strong>uration: <\/strong>\u00a02.00 hours\u00a0<\/li>\n<li><strong>Start Date: <\/strong>Wednesday, 22nd October , 2025 at 20:00 HRS (IST)<\/li>\n<li><strong>End Date: <\/strong>Wednesday, 22nd October, 2025 at 22:00 HRS (IST)<\/li>\n<li>Check your timezone <a href=\"https:\/\/www.timeanddate.com\/worldclock\/fixedtime.html?msg=CodeChef+Starters+209&amp;iso=20251022T20&amp;p1=44&amp;ah=2\" target=\"_blank\" rel=\"nofollow noreferrer noopener\">here<\/a>.<\/li>\n<\/ul>\n<h4>Eligibility Criteria: Anyone with a knack for programming<\/h4>\n<p class=\"last\">Our contests are open to all programmers across the globe.<\/p>\n<h4>What's in it for you?<\/h4>\n<p>The idea behind these programming contests is that we want you to learn while competing. Also, we believe that it is alright to refer to tutorials, books, and other materials, learn a concept, and then apply the same to solve a problem during a contest. But it is <strong>not alright to copy other people's solutions or seek other people's help to solve a problem. <\/strong>All the participants are expected to abide to <a class=\"button blue\" href=\"..\/codeofconduct\">CodeChef's Code Of Conduct<\/a>.<\/p>\n<h4>Rules and Regulations:<\/h4>\n<ul>\n<li>This is an IOI-style contest. This means that the problems will be partially graded. 
You will get the score for passing certain test data.<\/li>\n<li>The details of the failed test cases will also be visible on your solution page.<\/li>\n<li>You can submit solutions as many times as you'd like, there are no penalties for incorrect submissions. Only your best correct submission will be considered.<\/li>\n<li>Those who achieve the score first will be placed higher in the ranklist in case of a tie.<\/li>\n<li><strong>We have removed all the Institutions that we could not identify from our database. We request you to update your institutions once again by going to your profile page.<\/strong><\/li>\n<li>You can also send in your queries in an email to <a href=\"mailto:help@codechef.com\" target=\"_blank\" rel=\"noreferrer noopener\">help@codechef.com<\/a>, during the contest.<\/li>\n<li>Please do not discuss strategy, suggestions, or tips in the comments during a live contest. Posting questions clarifying the problem statement is ok. If you are unsure, email us at <a href=\"mailto:feedback@codechef.com\" target=\"_blank\" rel=\"noreferrer noopener\"> feedback@codechef.com<\/a>.<\/li>\n<li>Discussing CodeChef's problems or any aspect of a problem, on any other platform on the web, on identification, could lead to the disabling of the respective account and banning from the community.<\/li>\n<\/ul>\n<p><strong>Note: You can now \"Code, Compile, and Run\" your codes on our <a href=\"..\/ide\">Online IDE<\/a>.<\/strong><\/p>\n<p>However, if you are using any other online development environment, make sure that other contestants don't have access to your code. As a contestant, you are responsible for making sure others don't access the code that you submit. If you use Ideone, make sure to mark your submission \"private\" (not secret)\".<\/p>",
"time": {
"start": 1761143406,
"end": 1761150606,
"freezing": 0,
"current": 1761365589
},
"ip": "2603:7000:3900:1358:3959:b692:6cf3:cb03",
"announcements": "<p><strong>CodeChef \u00d7 Coding Club League (2025-26)<\/strong><br \/><br \/>Partner with CodeChef to build a strong coding culture on campus!<\/p>\n<p><strong>Benefits for Clubs:<\/strong><\/p>\n<ul>\n<li>Platform access and support for Annual Technical events \/ hackathons<\/li>\n<li>Pro access for winners<\/li>\n<li>Dashboard to track member progress<\/li>\n<li>Discounts on CodeChef Pro for all members<\/li>\n<li>Co-branding &amp; promotion on CodeChef channels<br \/><br \/>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0<strong style=\"text-align:center;\"><a class=\"button blue\" href=\"codechef-coding-club\" target=\"_blank\" rel=\"noreferrer noopener\">\u00a0Click Here To Know More<\/a><\/strong><\/li>\n<\/ul>\n<p><strong>\u00a0<\/strong><\/p>\n<p>\u00a0<\/p>",
"problemsstats": {
"attempted": [],
"partially_solved": [],
"solved": [],
"locked": []
},
"todos": [],
"stats": null,
"partial_scores": {
"P7209": [{ "score": "100", "count": "80" }],
"P5209": [{ "score": "100", "count": "3271" }],
"P4209": [{ "score": "100", "count": "1567" }],
"P1209": [{ "score": "100", "count": "25131" }],
"P3209": [{ "score": "100", "count": "13450" }],
"P2209": [{ "score": "100", "count": "21888" }],
"P8209": [{ "score": "100", "count": "22" }],
"P6209": [{ "score": "100", "count": "34" }],
"P6209E": [{ "score": "100", "count": "285" }]
},
"isRanklistFrozen": false,
"rank_and_score": { "score": "NA", "rank": "NA" },
"is_a_parent_contest": false,
"is_contest_elements_visible": true,
"is_OTP_required": false,
"is_linked_problems_contest": "0",
"custom_contest_page_title": "",
"custom_contest_page_meta_desc": "",
"contest_introduction": "https:\/\/discuss.codechef.com\/t\/invitation-to-codechef-starters-209-rated-upto-5-stars-22nd-october\/124401",
"contest_editorials": "https:\/\/discuss.codechef.com\/tag\/start209",
"contest_video_editorials": "",
"is_older_rating_based_division_system": false,
"division_generation": 3,
"isAssessmentContest": false,
"penalisedUsersCount": 0,
"ttl": 60,
"scorable_heading": "Scorable Problems for Division 4",
"scorable_message": "",
"division": "Division 4",
"non_scorable_heading": "Non Scorable Problems for Practice",
"non_scorable_message": "<p>The following problems are <b>NOT part of the contest<\/b>, and will not be counted towards your rankings and ratings. These are problems from the other Division(s), made available for you to practice. Click <a href='\/blogs\/how-does-codechef-rating-system-work'>here<\/a> to know more. They will be considered for plagiarism though.<\/p>",
"is_registration_enabled_contest": false,
"is_flexi_time_contest": false,
"duration": "120",
"is_proctored": false,
"autoRefresh": true,
"visitedContests": [],
"user_live_ratings_update_frequency": 15
}


@@ -0,0 +1,99 @@
{
"category_name": "main",
"contest_code": "START209D",
"contest_name": "Starters 209 (Rated)",
"status": "success",
"submit_error": "You need to login to submit.",
"is_verified": false,
"problem_code": "P1209",
"contest_category": "9",
"problem_name": "Bitcoin Market",
"intended_contest_code": "START209",
"body": "This is an example problem statement in markdown, and a mini guide on writing statements. Please make sure to remove everything here before publishing your problem.\n\n- Codechef uses markdown for its problem statements. Markdown syntax can be found [here](https:\/\/github.com\/showdownjs\/showdown\/wiki\/Showdown's-Markdown-syntax). Note the `[text](link)` syntax to insert a hyperlink.\n- Codechef also uses $\\LaTeX$ to render mathematical expressions, and you are advised to make liberal use of it to make your statement look good.\n- Text can be made **bold** or *italicized*.\n- **Do not** use HTML tags (p, ul, li, pre, br, ...) in the statement.\n- To insert an image, first upload it to an online hosting service (for an official contest, ask a Codechef admin to do this for you \u2014 this is important) and then use the following syntax: `![alt text](link-to-image)`.\n- If your problem doesn't contain subtasks, ensure that the Subtasks section below is disabled and **all content is deleted from it**.\n\nIf you face any issues, either contact a Codechef admin directly or send us an email at help@codechef.com.\n\nBelow is an example problem statement that uses some of the above-mentioned features.\n\n---------\n\nChef has a simple undirected graph $G$ with $N$ vertices and $M$ edges. A [subgraph](https:\/\/mathworld.wolfram.com\/Subgraph.html) $H$ of $G$ is called *good* if:\n- $H$ is connected\n- $H$ contains all $N$ vertices of $G$\n- There is a unique path between any two vertices in $H$, using only edges in $H$\n\nCount the number of *good* subgraphs of $G$. Since this number might be large, report it modulo $10^9 + 7$.\n\nIn other news, here's a completely unrelated image:\n\n![](https:\/\/s3.amazonaws.com\/codechef_shared\/download\/Images\/START41\/ss3.png).\n\n\n<aside style='background: #f8f8f8;padding: 10px 15px;'><div>All submissions for this problem are available.<\/div><\/aside>",
"problemComponents": {
"constraints": "- $1 \\leq R \\leq 10$",
"constraintsState": true,
"subtasks": "- **Subtask 1 (10 points):** $1 \\leq M \\leq 10$\n- **Subtask 2 (20 points):** The sum of $N$ across all test cases won't exceed $20$.\n- **Subtask 3 (70 points):** No further constraints.",
"subtasksState": false,
"statement": "Chef has recently started investing in **Bitcoin**. \nHe assigns a **market risk level** $R$ (from $1$ to $10$), where: \n\n- $1$ means the market is *very safe*, \n- $10$ means the market is *very risky*. \n\nChef will **buy Bitcoin** only if the risk level is **$4$ or less**. \n\nGiven the current risk level $R$, determine whether Chef should buy Bitcoin.\n\nPrint **\"YES\"** if Chef should buy, otherwise print **\"NO\"**.",
"inputFormat": "- The first and only line of input contains a single integer $R$ \u2014 the current market risk level.",
"inputFormatState": true,
"outputFormat": "Print `YES` if Chef should buy Bitcoin, Otherwise, print `NO`.\n\nYou may print each character of the string in uppercase or lowercase (for example, the strings `YES`, `yEs`, `yes`, and `yeS` will all be treated as identical).\n",
"outputFormatState": true,
"sampleTestCases": [
{
"id": "1",
"input": "2",
"output": "YES",
"explanation": "The current market risk is $2$. \nSince $2$ is not larger than $4$, the risk is small enough, and Chef will buy Bitcoin.",
"isDeleted": false
},
{
"id": "2",
"input": "4",
"output": "YES",
"explanation": "The current market risk is $4$. \nSince $4$ is not larger than $4$, the risk is small enough, and Chef will buy Bitcoin.",
"isDeleted": false
},
{
"id": "3",
"input": "5",
"output": "NO",
"explanation": "The current market risk is $5$. \nSince $5$ is larger than $4$, the risk is too much, and Chef will **not** buy Bitcoin.",
"isDeleted": false
}
]
},
"gumlet_video_url": "",
"video_editorial_url": "https:\/\/youtu.be\/tjUCV9Ld1Kw?si=minop9943wecj1bh",
"text_editorial_body": "<h1><a name=\"problem-link-1\" class=\"anchor\" href=\"#problem-link-1\"><\/a>PROBLEM LINK:<\/h1>\n<p><a href=\"https:\/\/www.codechef.com\/problems\/P1209\">Practice<\/a><br>\n<a href=\"https:\/\/www.codechef.com\/START209A\/problems\/P1209\">Contest: Division 1<\/a><br>\n<a href=\"https:\/\/www.codechef.com\/START209B\/problems\/P1209\">Contest: Division 2<\/a><br>\n<a href=\"https:\/\/www.codechef.com\/START209C\/problems\/P1209\">Contest: Division 3<\/a><br>\n<a href=\"https:\/\/www.codechef.com\/START209D\/problems\/P1209\">Contest: Division 4<\/a><\/p>\n<p><em><strong>Author:<\/strong><\/em> <a href=\"https:\/\/www.codechef.com\/users\/pols_agyi_pols\">pols_agyi_pols<\/a><br>\n<em><strong>Tester:<\/strong><\/em> <a href=\"https:\/\/www.codechef.com\/users\/kingmessi\">kingmessi<\/a><br>\n<em><strong>Editorialist:<\/strong><\/em> <a href=\"https:\/\/www.codechef.com\/users\/iceknight1093\">iceknight1093<\/a><\/p>\n<h1><a name=\"difficulty-2\" class=\"anchor\" href=\"#difficulty-2\"><\/a>DIFFICULTY:<\/h1>\n<p>Cakewalk<\/p>\n<h1><a name=\"prerequisites-3\" class=\"anchor\" href=\"#prerequisites-3\"><\/a>PREREQUISITES:<\/h1>\n<p>None<\/p>\n<h1><a name=\"problem-4\" class=\"anchor\" href=\"#problem-4\"><\/a>PROBLEM:<\/h1>\n<p>Chef will buy bitcoin if the market risk level is no more than <span class=\"math\">4<\/span>.<br>\nThe current market risk level is <span class=\"math\">R<\/span>.<br>\nWill Chef buy bitcoin?<\/p>\n<h1><a name=\"explanation-5\" class=\"anchor\" href=\"#explanation-5\"><\/a>EXPLANATION:<\/h1>\n<p>The answer is <code>Yes<\/code> if <span class=\"math\">R \\le 4<\/span> and <code>No<\/code> otherwise.<br>\nThis can be checked using an <code>if<\/code> condition.<\/p>\n<h1><a name=\"time-complexity-6\" class=\"anchor\" href=\"#time-complexity-6\"><\/a>TIME COMPLEXITY:<\/h1>\n<p><span class=\"math\">\\mathcal{O}(1)<\/span> per testcase.<\/p>\n<h1><a name=\"code-7\" class=\"anchor\" 
href=\"#code-7\"><\/a>CODE:<\/h1>\n<details>\n<summary>\nEditorialist's code (PyPy3)<\/summary>\n<pre><code class=\"lang-python\">r = int(input())\nprint('Yes' if r &lt;= 4 else 'No')\n<\/code><\/pre>\n<\/details>",
"text_editorial_is_markdown": 0,
"text_editorial_topic_id": 124410,
"languages_supported": "CPP20, PYTH 3, C, JAVA, PYP3, CS2, NODEJS, GO, TS, PHP, kotlin, rust, R",
"max_timelimit": "1",
"source_sizelimit": "50000",
"problem_author": "archit_adm",
"problem_display_authors": ["archit_adm"],
"problem_display_authors_html_handle": "<div class=\"multiple-usernames-container\"><a href='\/users\/archit_adm'>archit_adm<\/a><\/div>",
"problem_tester": null,
"problem_testers_usernames": ["kingmessi"],
"problem_tester_html_handle": "<div class=\"multiple-usernames-container\"><a href='\/users\/kingmessi'><span \n class='rating' \n style='display: inline-block; \n font-size: 10px; \n background: #D0011B;\n padding: 0 3px; \n line-height: 1.3; \n color: white;\n margin-right: 2px;'>7&#9733;<\/span><span class='m-username--link'>kingmessi<\/span><\/a><\/div>",
"problem_editorialist": "iceknight1093",
"date_added": "20-10-2025",
"ready_for_debug": false,
"problem_stats": {
"accuracy": 85.780000000000001,
"successful_submissions": "25325",
"total_submissions": "33327"
},
"user_tags": ["archit_adm", "cakewalk", "start209"],
"computed_tags": [],
"difficulty_rating": "172",
"best_tag": "",
"editorial_url": "",
"time": {
"view_start_date": 1761143406,
"submit_start_date": 1761143406,
"visible_start_date": 1761150606,
"end_date": 1761150606,
"current": 1761365589,
"practice_submission_allowed": false
},
"user": { "username": null, "access": "default", "isPremiumUser": false },
"bookmark_status": false,
"contest_problem_status": "unattempted",
"problem_status": "unattempted",
"is_direct_submittable": false,
"problemDiscussURL": "https:\/\/discuss.codechef.com\/search?q=P1209",
"is_a_practice_or_college_contest": false,
"votes_data": {
"SolutionVoteData": { "upvote_count": 0, "user_vote": 0 },
"HintsVoteData": { "upvote_count": 0, "user_vote": 0 },
"ProblemStatementVoteData": { "upvote_count": 26, "user_vote": 0 },
"DoubtSupportVoteData": { "upvote_count": 0, "user_vote": 0 }
},
"is_proctored": false,
"is_user_verified_for_proctoring": false,
"visitedContests": [],
"isSupportedByJudge": true
}
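The payload above carries the runnable examples under `problemComponents.sampleTestCases`. A minimal sketch of pulling (input, expected-output) pairs out of such a payload; the keys come from the fixture, but the helper itself is illustrative and not the plugin's actual code:

```python
import json

# Minimal slice of the CodeChef metadata payload above; keys and values
# are copied from the fixture, trimmed for brevity.
payload = json.loads("""
{
  "problem_name": "Bitcoin Market",
  "problemComponents": {
    "sampleTestCases": [
      {"id": "1", "input": "2", "output": "YES", "isDeleted": false},
      {"id": "2", "input": "4", "output": "YES", "isDeleted": false},
      {"id": "3", "input": "5", "output": "NO", "isDeleted": false}
    ]
  }
}
""")


def extract_samples(meta: dict) -> list[tuple[str, str]]:
    """Return (input, expected-output) pairs, skipping soft-deleted cases."""
    cases = meta.get("problemComponents", {}).get("sampleTestCases", [])
    return [(c["input"], c["output"]) for c in cases if not c.get("isDeleted")]


print(extract_samples(payload))  # [('2', 'YES'), ('4', 'YES'), ('5', 'NO')]
```

The `isDeleted` filter matters because CodeChef soft-deletes sample cases rather than removing them from the array.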

tests/fixtures/codechef/contests.json (vendored, new file, 330 lines)

@@ -0,0 +0,330 @@
{
"status": "success",
"message": "All contests list",
"present_contests": [
{
"contest_code": "DEVWEEKEND21",
"contest_name": "Weekend Dev Challenge 21: Full Stack Projects using MERN",
"contest_start_date": "25 Oct 2025 00:00:00",
"contest_end_date": "27 Oct 2025 00:00:00",
"contest_start_date_iso": "2025-10-25T00:00:00+05:30",
"contest_end_date_iso": "2025-10-27T00:00:00+05:30",
"contest_duration": "2880",
"distinct_users": 8
}
],
"future_contests": [
{
"contest_code": "START210",
"contest_name": "Starters 210",
"contest_start_date": "29 Oct 2025 20:00:00",
"contest_end_date": "29 Oct 2025 22:00:00",
"contest_start_date_iso": "2025-10-29T20:00:00+05:30",
"contest_end_date_iso": "2025-10-29T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 0
},
{
"contest_code": "START211",
"contest_name": "Starters 211",
"contest_start_date": "05 Nov 2025 20:00:00",
"contest_end_date": "05 Nov 2025 22:00:00",
"contest_start_date_iso": "2025-11-05T20:00:00+05:30",
"contest_end_date_iso": "2025-11-05T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 0
}
],
"practice_contests": [],
"past_contests": [
{
"contest_code": "START209",
"contest_name": "Starters 209 (Rated till 5 star)",
"contest_start_date": "22 Oct 2025 20:00:00",
"contest_end_date": "22 Oct 2025 22:00:00",
"contest_start_date_iso": "2025-10-22T20:00:00+05:30",
"contest_end_date_iso": "2025-10-22T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 30408
},
{
"contest_code": "DSAMONDAY08",
"contest_name": "Monday Munch - DSA Challenge 08",
"contest_start_date": "20 Oct 2025 18:00:31",
"contest_end_date": "20 Oct 2025 21:00:31",
"contest_start_date_iso": "2025-10-20T18:00:31+05:30",
"contest_end_date_iso": "2025-10-20T21:00:31+05:30",
"contest_duration": "180",
"distinct_users": 653
},
{
"contest_code": "DEVWEEKEND20",
"contest_name": "Weekend Dev Challenge 20: Full Stack Projects using MERN",
"contest_start_date": "18 Oct 2025 00:00:00",
"contest_end_date": "20 Oct 2025 00:00:00",
"contest_start_date_iso": "2025-10-18T00:00:00+05:30",
"contest_end_date_iso": "2025-10-20T00:00:00+05:30",
"contest_duration": "2880",
"distinct_users": 318
},
{
"contest_code": "START208",
"contest_name": "Starters 208 (Rated till 6 star)",
"contest_start_date": "15 Oct 2025 20:00:00",
"contest_end_date": "15 Oct 2025 22:00:00",
"contest_start_date_iso": "2025-10-15T20:00:00+05:30",
"contest_end_date_iso": "2025-10-15T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 37727
},
{
"contest_code": "DSAMONDAY07",
"contest_name": "Monday Munch - DSA Challenge 07",
"contest_start_date": "13 Oct 2025 18:00:00",
"contest_end_date": "13 Oct 2025 21:00:00",
"contest_start_date_iso": "2025-10-13T18:00:00+05:30",
"contest_end_date_iso": "2025-10-13T21:00:00+05:30",
"contest_duration": "180",
"distinct_users": 4934
},
{
"contest_code": "DEVWEEKEND19",
"contest_name": "Weekend Dev Challenge 19: Full Stack Projects using MERN",
"contest_start_date": "11 Oct 2025 00:00:00",
"contest_end_date": "13 Oct 2025 00:00:00",
"contest_start_date_iso": "2025-10-11T00:00:00+05:30",
"contest_end_date_iso": "2025-10-13T00:00:00+05:30",
"contest_duration": "2880",
"distinct_users": 5376
},
{
"contest_code": "START207",
"contest_name": "Starters 207 (Rated till 5 star)",
"contest_start_date": "08 Oct 2025 20:00:00",
"contest_end_date": "08 Oct 2025 22:00:00",
"contest_start_date_iso": "2025-10-08T20:00:00+05:30",
"contest_end_date_iso": "2025-10-08T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 32785
},
{
"contest_code": "DSAMONDAY06",
"contest_name": "Monday Munch - DSA Challenge 06",
"contest_start_date": "06 Oct 2025 18:00:02",
"contest_end_date": "06 Oct 2025 21:00:02",
"contest_start_date_iso": "2025-10-06T18:00:02+05:30",
"contest_end_date_iso": "2025-10-06T21:00:02+05:30",
"contest_duration": "180",
"distinct_users": 892
},
{
"contest_code": "DEVWEEKEND18",
"contest_name": "Weekend Dev Challenge 18: Full Stack Projects using MERN",
"contest_start_date": "04 Oct 2025 00:00:00",
"contest_end_date": "06 Oct 2025 00:00:00",
"contest_start_date_iso": "2025-10-04T00:00:00+05:30",
"contest_end_date_iso": "2025-10-06T00:00:00+05:30",
"contest_duration": "2880",
"distinct_users": 223
},
{
"contest_code": "START206",
"contest_name": "Starters 206 (Rated till 5 star)",
"contest_start_date": "01 Oct 2025 20:00:00",
"contest_end_date": "01 Oct 2025 22:00:00",
"contest_start_date_iso": "2025-10-01T20:00:00+05:30",
"contest_end_date_iso": "2025-10-01T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 23977
},
{
"contest_code": "DSAMONDAY05",
"contest_name": "Monday Munch - DSA Challenge 05",
"contest_start_date": "29 Sep 2025 18:00:00",
"contest_end_date": "29 Sep 2025 21:00:00",
"contest_start_date_iso": "2025-09-29T18:00:00+05:30",
"contest_end_date_iso": "2025-09-29T21:00:00+05:30",
"contest_duration": "180",
"distinct_users": 1160
},
{
"contest_code": "DEVWEEKEND17",
"contest_name": "Weekend Dev Challenge 17: GenAI Projects using LLM",
"contest_start_date": "27 Sep 2025 00:00:00",
"contest_end_date": "29 Sep 2025 00:00:00",
"contest_start_date_iso": "2025-09-27T00:00:00+05:30",
"contest_end_date_iso": "2025-09-29T00:00:00+05:30",
"contest_duration": "2880",
"distinct_users": 130
},
{
"contest_code": "START205",
"contest_name": "Starters 205 (Rated till 6 star)",
"contest_start_date": "24 Sep 2025 20:00:00",
"contest_end_date": "24 Sep 2025 22:00:00",
"contest_start_date_iso": "2025-09-24T20:00:00+05:30",
"contest_end_date_iso": "2025-09-24T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 32552
},
{
"contest_code": "DSAMONDAY04",
"contest_name": "Monday Munch - DSA Challenge 04",
"contest_start_date": "22 Sep 2025 18:00:00",
"contest_end_date": "22 Sep 2025 21:00:00",
"contest_start_date_iso": "2025-09-22T18:00:00+05:30",
"contest_end_date_iso": "2025-09-22T21:00:00+05:30",
"contest_duration": "180",
"distinct_users": 759
},
{
"contest_code": "DEVWEEKEND16",
"contest_name": "Weekend Dev Challenge 16: GenAI Projects using LLM",
"contest_start_date": "20 Sep 2025 00:00:00",
"contest_end_date": "22 Sep 2025 00:00:00",
"contest_start_date_iso": "2025-09-20T00:00:00+05:30",
"contest_end_date_iso": "2025-09-22T00:00:00+05:30",
"contest_duration": "2880",
"distinct_users": 171
},
{
"contest_code": "START204",
"contest_name": "Starters 204 (Rated till 5 star)",
"contest_start_date": "17 Sep 2025 20:00:00",
"contest_end_date": "17 Sep 2025 22:00:00",
"contest_start_date_iso": "2025-09-17T20:00:00+05:30",
"contest_end_date_iso": "2025-09-17T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 36282
},
{
"contest_code": "DSAMONDAY03",
"contest_name": "Monday Munch - DSA Challenge 03",
"contest_start_date": "15 Sep 2025 18:00:00",
"contest_end_date": "15 Sep 2025 21:00:00",
"contest_start_date_iso": "2025-09-15T18:00:00+05:30",
"contest_end_date_iso": "2025-09-15T21:00:00+05:30",
"contest_duration": "180",
"distinct_users": 657
},
{
"contest_code": "DEVWEEKEND15",
"contest_name": "Weekend Dev Challenge 15: Classify images using Deep Learning",
"contest_start_date": "13 Sep 2025 00:00:00",
"contest_end_date": "14 Sep 2025 00:00:00",
"contest_start_date_iso": "2025-09-13T00:00:00+05:30",
"contest_end_date_iso": "2025-09-14T00:00:00+05:30",
"contest_duration": "1440",
"distinct_users": 112
},
{
"contest_code": "START203",
"contest_name": "Starters 203 (Rated till 5 star)",
"contest_start_date": "10 Sep 2025 20:00:00",
"contest_end_date": "10 Sep 2025 22:00:00",
"contest_start_date_iso": "2025-09-10T20:00:00+05:30",
"contest_end_date_iso": "2025-09-10T22:00:00+05:30",
"contest_duration": "120",
"distinct_users": 36512
},
{
"contest_code": "DSAMONDAY02",
"contest_name": "Monday Munch - DSA Challenge 02",
"contest_start_date": "08 Sep 2025 18:00:00",
"contest_end_date": "08 Sep 2025 21:00:00",
"contest_start_date_iso": "2025-09-08T18:00:00+05:30",
"contest_end_date_iso": "2025-09-08T21:00:00+05:30",
"contest_duration": "180",
"distinct_users": 737
}
],
"skill_tests": [
{
"contest_code": "basic-python",
"contest_name": "Python Online Test & Quiz",
"contest_start_date": "27 Mar 2024 15:00:00",
"contest_end_date": "01 Jan 2027 01:30:00",
"contest_start_date_iso": "2024-03-27T15:00:00+05:30",
"contest_end_date_iso": "2027-01-01T01:30:00+05:30",
"contest_duration": "90",
"problem_count": 30,
"distinct_users": 61244
},
{
"contest_code": "basic-java",
"contest_name": "Java Online Test & Quiz",
"contest_start_date": "28 Mar 2024 00:00:00",
"contest_end_date": "01 Jan 2027 01:30:00",
"contest_start_date_iso": "2024-03-28T00:00:00+05:30",
"contest_end_date_iso": "2027-01-01T01:30:00+05:30",
"contest_duration": "90",
"problem_count": 30,
"distinct_users": 49993
},
{
"contest_code": "basic-c-language",
"contest_name": "C language online test",
"contest_start_date": "28 Mar 2024 00:00:00",
"contest_end_date": "01 Jan 2027 01:30:00",
"contest_start_date_iso": "2024-03-28T00:00:00+05:30",
"contest_end_date_iso": "2027-01-01T01:30:00+05:30",
"contest_duration": "90",
"problem_count": 30,
"distinct_users": 41373
},
{
"contest_code": "basic-c-plus-plus",
"contest_name": "C++ Online Test and Quiz",
"contest_start_date": "28 Mar 2024 00:00:00",
"contest_end_date": "01 Jan 2027 01:30:00",
"contest_start_date_iso": "2024-03-28T00:00:00+05:30",
"contest_end_date_iso": "2027-01-01T01:30:00+05:30",
"contest_duration": "90",
"problem_count": 30,
"distinct_users": 32507
},
{
"contest_code": "basic-sql",
"contest_name": "SQL Online Test and Quiz",
"contest_start_date": "01 Jun 2024 00:00:00",
"contest_end_date": "01 Jan 2027 01:00:00",
"contest_start_date_iso": "2024-06-01T00:00:00+05:30",
"contest_end_date_iso": "2027-01-01T01:00:00+05:30",
"contest_duration": "60",
"problem_count": 17,
"distinct_users": 17426
},
{
"contest_code": "operating-systems",
"contest_name": "Operating Systems Skill Test",
"contest_start_date": "01 Jun 2024 00:00:00",
"contest_end_date": "01 Jan 2027 00:45:00",
"contest_start_date_iso": "2024-06-01T00:00:00+05:30",
"contest_end_date_iso": "2027-01-01T00:45:00+05:30",
"contest_duration": "45",
"problem_count": 30,
"distinct_users": 8751
},
{
"contest_code": "c-language-dsa",
"contest_name": "Data structures and Algorithms in C test",
"contest_start_date": "01 Apr 2024 12:00:00",
"contest_end_date": "01 Jan 2027 02:00:00",
"contest_start_date_iso": "2024-04-01T12:00:00+05:30",
"contest_end_date_iso": "2027-01-01T02:00:00+05:30",
"contest_duration": "120",
"problem_count": 28,
"distinct_users": 6611
}
],
"banners": [
{
"image": "1760933050.png",
"link": "https:\/\/www.codechef.com\/START209"
},
{
"image": "1719492535.png",
"link": "https:\/\/www.codechef.com\/roadmap\/data-structures-and-algorithms"
}
]
}
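Every contest entry in the fixture above carries both a display date and an ISO-8601 timestamp, plus a duration in minutes encoded as a string. A small stdlib-only sketch of deriving the end time from the start and duration, using one entry from the fixture (illustrative, not the plugin's actual parser):

```python
import json
from datetime import datetime, timedelta

# One entry from the "future_contests" array above; keys copied verbatim.
contest = json.loads("""
{
  "contest_code": "START210",
  "contest_name": "Starters 210",
  "contest_start_date_iso": "2025-10-29T20:00:00+05:30",
  "contest_duration": "120"
}
""")

# contest_duration is minutes as a string; the end time follows from the
# ISO start plus that duration. fromisoformat handles the +05:30 offset.
start = datetime.fromisoformat(contest["contest_start_date_iso"])
end = start + timedelta(minutes=int(contest["contest_duration"]))
print(end.isoformat())  # 2025-10-29T22:00:00+05:30
```

The derived end time matches the fixture's `contest_end_date_iso` for the same entry, which is a cheap consistency check a scraper could apply.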

@@ -27,7 +27,7 @@
 <a href="/" class="logo"><img src="/logo.png?1" alt="CSES" /></a>
 <a
   class="menu-toggle"
-  onclick="document.body.classList.toggle('menu-open');"
+  onclick="document.body.classList.toggle('menu-open')"
 >
   <i class="fas fa-bars"></i>
 </a>

@@ -27,7 +27,7 @@
 <a href="/" class="logo"><img src="/logo.png?1" alt="CSES" /></a>
 <a
   class="menu-toggle"
-  onclick="document.body.classList.toggle('menu-open');"
+  onclick="document.body.classList.toggle('menu-open')"
 >
   <i class="fas fa-bars"></i>
 </a>

@@ -27,7 +27,7 @@
 <a href="/" class="logo"><img src="/logo.png?1" alt="CSES" /></a>
 <a
   class="menu-toggle"
-  onclick="document.body.classList.toggle('menu-open');"
+  onclick="document.body.classList.toggle('menu-open')"
 >
   <i class="fas fa-bars"></i>
 </a>

@@ -6,11 +6,6 @@ from scrapers.models import (
     TestsResult,
 )
-MODEL_FOR_MODE = {
-    "metadata": MetadataResult,
-    "contests": ContestListResult,
-}
 MATRIX = {
     "cses": {
         "metadata": ("introductory_problems",),
@@ -27,6 +22,11 @@ MATRIX = {
         "tests": ("1550",),
         "contests": tuple(),
     },
+    "codechef": {
+        "metadata": ("START209D",),
+        "tests": ("START209D",),
+        "contests": tuple(),
+    },
 }
@@ -38,24 +38,34 @@ def test_scraper_offline_fixture_matrix(run_scraper_offline, scraper, mode):
assert rc in (0, 1), f"Bad exit code {rc}"
assert objs, f"No JSON output for {scraper}:{mode}"
if mode in ("metadata", "contests"):
Model = MODEL_FOR_MODE[mode]
model = Model.model_validate(objs[-1])
assert model is not None
if mode == "metadata":
model = MetadataResult.model_validate(objs[-1])
assert model.success is True
if mode == "metadata":
assert model.url
assert len(model.problems) >= 1
assert all(isinstance(p.id, str) and p.id for p in model.problems)
else:
assert len(model.contests) >= 1
assert model.url
assert len(model.problems) >= 1
assert all(isinstance(p.id, str) and p.id for p in model.problems)
elif mode == "contests":
model = ContestListResult.model_validate(objs[-1])
assert model.success is True
assert len(model.contests) >= 1
else:
assert len(objs) >= 1, "No test objects returned"
validated_any = False
for obj in objs:
if "success" in obj and "tests" in obj and "problem_id" in obj:
tr = TestsResult.model_validate(obj)
assert tr.problem_id != ""
assert isinstance(tr.tests, list)
assert hasattr(tr, "combined"), "Missing combined field"
assert tr.combined is not None, "combined field is None"
assert hasattr(tr.combined, "input"), "combined missing input"
assert hasattr(tr.combined, "expected"), "combined missing expected"
assert isinstance(tr.combined.input, str), "combined.input not string"
assert isinstance(tr.combined.expected, str), (
"combined.expected not string"
)
assert hasattr(tr, "multi_test"), "Missing multi_test field"
assert isinstance(tr.multi_test, bool), "multi_test not boolean"
validated_any = True
else:
assert "problem_id" in obj
@@ -63,5 +73,17 @@ def test_scraper_offline_fixture_matrix(run_scraper_offline, scraper, mode):
assert (
"timeout_ms" in obj and "memory_mb" in obj and "interactive" in obj
)
assert "combined" in obj, "Missing combined field in raw JSON"
assert isinstance(obj["combined"], dict), "combined not a dict"
assert "input" in obj["combined"], "combined missing input key"
assert "expected" in obj["combined"], "combined missing expected key"
assert isinstance(obj["combined"]["input"], str), (
"combined.input not string"
)
assert isinstance(obj["combined"]["expected"], str), (
"combined.expected not string"
)
assert "multi_test" in obj, "Missing multi_test field in raw JSON"
assert isinstance(obj["multi_test"], bool), "multi_test not boolean"
validated_any = True
assert validated_any, "No valid tests payloads validated"
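The assertions above pin down the shape of a tests payload: `problem_id`, `success`, a `tests` list, a `combined` input/expected pair, and a `multi_test` flag. A hypothetical dataclass sketch of that shape, for reference only; the real definitions live in `scrapers.models` and may well differ:

```python
from dataclasses import dataclass, field


# Sketch of the payload shape implied by the test assertions above;
# the actual pydantic models in scrapers.models are the source of truth.
@dataclass
class Combined:
    input: str = ""
    expected: str = ""


@dataclass
class TestsResult:
    problem_id: str
    success: bool
    tests: list = field(default_factory=list)
    combined: Combined = field(default_factory=Combined)
    multi_test: bool = False


tr = TestsResult(
    problem_id="START209D",
    success=True,
    tests=[{"input": "2\n", "expected": "YES\n"}],
    combined=Combined(input="2\n", expected="YES\n"),
)
assert isinstance(tr.combined.input, str) and isinstance(tr.multi_test, bool)
```

The `combined` pair lets a runner feed every sample case through one subprocess invocation, while `multi_test` signals whether outputs must be split back apart for per-case comparison.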

uv.lock (generated, 1663 lines): diff suppressed because it is too large.