gitea/models/fixtures/repository.yml
Kyle Evans e461f0854f
[RFC] Make archival asynchronous (#11296)
* Make archival asynchronous

The prime benefit being sought here is for large archives to not
clog up the rendering process and cause unsightly proxy timeouts.
As a secondary benefit, archive-in-progress is moved out of the
way into a /tmp file so that new archival requests for the same
commit will not get fulfilled based on an archive that isn't yet
finished.

This asynchronous system is fairly primitive: when a request comes in, we
spawn a new goroutine to handle it, then mark the request as done once it
finishes. Status requests check whether the file exists in the final
location and report the archival as done when it does.
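
In rough outline, and with illustrative names only (ArchiveRequest,
doArchive, and archivePath are inventions of this sketch, not necessarily
the identifiers in the code):

    package archiver

    import (
        "os"
        "sync"
    )

    // ArchiveRequest describes one pending archive; archivePath is the
    // final on-disk location the finished archive is renamed into.
    type ArchiveRequest struct {
        archivePath string
    }

    var (
        queueMutex        sync.Mutex
        archiveInProgress []*ArchiveRequest
    )

    // ArchiveRepository queues the request and returns immediately;
    // the actual work happens in its own goroutine.
    func ArchiveRepository(req *ArchiveRequest) {
        queueMutex.Lock()
        archiveInProgress = append(archiveInProgress, req)
        queueMutex.Unlock()

        go doArchive(req) // archive into /tmp, then rename into place
    }

    // complete reports whether the archive exists at its final
    // location, which is exactly the check status requests perform.
    func (req *ArchiveRequest) complete() bool {
        _, err := os.Stat(req.archivePath)
        return err == nil
    }

    func doArchive(req *ArchiveRequest) { /* git archive to tmp, then rename */ }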

Fixes #11265

* Archive links: drop initial delay to three-quarters of a second

Some, or perhaps even most, archives will not take all that long to create.
The archive process starts as soon as the download button is initially
clicked, so in theory they could be done quite quickly.  Drop the initial
delay down to three-quarters of a second to make it more responsive in the
common case of the archive being quickly created.

* archiver: restructure a little bit to facilitate testing

This introduces two sync.Cond pointers to the archiver package. If they're
non-nil when we go to process a request, we'll wait until signalled before
proceeding. The tests then create the sync.Cond so that they can signal at
will and sanity-check the state of the queue at different phases.
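
A sketch of that gate, again with assumed names; the production path pays
only a nil check:

    // Test hooks: nil in production. Tests assign these so they can
    // pause request processing at known points and inspect the queue.
    var (
        archiveStartCond   *sync.Cond
        archiveReleaseCond *sync.Cond
    )

    // waitIfTesting returns immediately when no test has installed a
    // cond var, and otherwise blocks until the test signals.
    func waitIfTesting(cond *sync.Cond) {
        if cond == nil {
            return
        }
        cond.L.Lock()
        cond.Wait() // released by the test's Signal()/Broadcast()
        cond.L.Unlock()
    }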

The author believes that nil-checking these two sync.Cond pointers on every
archive request will introduce minimal overhead, with no impact on
maintainability.

* gofmt nit: no space around binary + operator

* services: archiver: appease golangci-lint, lock queueMutex

Locking/unlocking the queueMutex is allowed, but not required, for
Cond.Signal() and Cond.Broadcast().  The magic at play here is just a little
too much for golangci-lint: because we take the address of queueMutex, and
it is mostly used in archiver.go, the variable still gets flagged as unused.

* archiver: tests: fix several timing nits

Once we've signaled a cond var, it may take some small amount of time for
the released goroutines to reach the spot we want them at. Give them an
appropriate amount of time.

* archiver: tests: no underscore in var name, ungh

* archiver: tests: Test* is run in a separate context than TestMain

We must set up the mutex/cond variables at the beginning of any test that's
going to use them, or else they will be nil when the test is actually run.

* archiver: tests: hopefully final tweak

Things got shuffled around such that we carefully build up and release
requests from the queue, so we can validate the state of the queue at each
step. Fix some assertions that no longer hold true as fallout.

* repo: Download: restore some semblance of previous behavior

When archival was made async, the GET endpoint was only useful if a previous
POST had initiated the download. This commit restores the previous behavior,
to an extent: we now submit the archive request there, and if we don't
manage to complete it within ~2 seconds of submission we return a
"202 Accepted" to indicate that it's still processing.

This lets a client directly GET the archive, and gives them some indication
that they may attempt to GET it again at a later time.
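
Extending the earlier sketch ("net/http" and "time" added to its imports;
the handler shape is illustrative, not the actual router signature), the
grace period looks roughly like:

    // Download submits the request, then gives it ~2 seconds to
    // finish before telling the client to come back later.
    func Download(w http.ResponseWriter, r *http.Request, req *ArchiveRequest) {
        ArchiveRepository(req) // submit (or re-submit) the request

        deadline := time.Now().Add(2 * time.Second)
        for time.Now().Before(deadline) {
            if req.complete() {
                http.ServeFile(w, r, req.archivePath)
                return
            }
            time.Sleep(250 * time.Millisecond) // busy-wait; removed later
        }
        w.WriteHeader(http.StatusAccepted) // 202: try again later
    }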

* archiver: tests: simplify a bit further

We don't need to risk a parse failure by using time.ParseDuration just to
get 2 * time.Second.

An else if isn't really necessary when the conditions are simple enough and
lead to the same result.

* archiver: tests: resolve potential source of flakiness

Increase all timeouts to 10 seconds; these aren't hard-coded sleeps, so
there's no guarantee we'll actually take that long. If we need longer to
not have a false-positive, then so be it.

While here, various assert.{Not,}Equal arguments are flipped around so that
the wording in error output reflects reality, where the expected argument is
second and actual third.

* archiver: setup infrastructure for notifying consumers of completion

This API will *not* allow consumers to subscribe to specific requests being
completed, just *any* request being completed. The caller is responsible for
determining if their request is satisfied and waiting again if needed.
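
One way to build such an any-request notification (a sketch of the pattern,
not necessarily the PR's exact mechanics) is a shared channel that each
completion closes and replaces:

    var (
        notifyMutex sync.Mutex
        notifyChan  = make(chan struct{})
    )

    // notifyCompletion wakes every current waiter; it runs once per
    // finished archive, whichever request that happens to be.
    func notifyCompletion() {
        notifyMutex.Lock()
        close(notifyChan)                // release all current waiters
        notifyChan = make(chan struct{}) // re-arm for the next completion
        notifyMutex.Unlock()
    }

    // WaitForCompletion blocks until *some* request completes;
    // callers re-check their own request and wait again if needed.
    func WaitForCompletion() {
        notifyMutex.Lock()
        ch := notifyChan
        notifyMutex.Unlock()
        <-ch
    }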

* repo: archive: make GET endpoint synchronous again

If the request isn't complete, this endpoint will now submit the request and
wait for completion using the new API. This may still be susceptible to
timeouts for larger repos, but other endpoints now exist that the web
interface will use to negotiate its way through larger archive processes.

* archiver: tests: amend test to include WaitForCompletion()

This is a trivial one, so go ahead and include it.

* archiver: tests: fix test by calling NewContext()

The mutex is otherwise uninitialized, so we need to ensure that we're
actually initializing it if we plan to test it.

* archiver: tests: integrate new WaitForCompletion a little better

We can use this to wait for archives to come in, rather than spinning and
hoping with a timeout.

* archiver: tests: combine numQueued declaration with next-instruction assignment

* routers: repo: reap unused archiving flag from DownloadStatus()

This had some planned usage before, indicating whether this request
initiated the archival process or not. After several rounds of refactoring,
this use was deemed unnecessary and boiled down to just !complete in all
cases.

* services: archiver: restructure to use a channel

We now offer two forms of waiting for a request:
- WaitForCompletion: wait for completion with no timeout
- TimedWaitForCompletion: wait for completion with timeout

In both cases, we wait for the given request's cchan to close; in the latter
case, we do so with the caller-provided timeout. This completely removes the
need for busy-wait loops in Download/InitiateDownload, as it's fairly clean
to wait on a channel with timeout.
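
Assuming ArchiveRequest gains a cchan chan struct{} field that doArchive
closes on completion (and "time" joins the sketch's imports), the two forms
reduce to a channel receive:

    // WaitForCompletion blocks until this request's archive is done.
    func (req *ArchiveRequest) WaitForCompletion() {
        <-req.cchan
    }

    // TimedWaitForCompletion blocks until completion or timeout and
    // reports whether the archive completed in time.
    func (req *ArchiveRequest) TimedWaitForCompletion(d time.Duration) bool {
        timer := time.NewTimer(d)
        defer timer.Stop()
        select {
        case <-req.cchan:
            return true
        case <-timer.C:
            return false
        }
    }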

* services: archiver: use defer to unlock now that we can

This previously carried the lock into the goroutine, but an intermediate
step just added the request to archiveInProgress outside of the new
goroutine and removed the need for the goroutine to start out with it.

* Revert "archiver: tests: combine numQueued declaration with next-instruction assignment"

This reverts commit bcc5214023.

Revert "archiver: tests: integrate new WaitForCompletion a little better"

This reverts commit 9fc8bedb56.

Revert "archiver: tests: fix test by calling NewContext()"

This reverts commit 709c35685e.

Revert "archiver: tests: amend test to include WaitForCompletion()"

This reverts commit 75261f56bc.

* archiver: tests: first attempt at WaitForCompletion() tests

* archiver: tests: slight improvement, less busy-loop

Just wait for the requests to complete in order, instead of busy-waiting
with a timeout.  This is slightly less fragile.

While here, reverse the arguments of a nearby assert.Equal() so that
expected/actual are correct in any test output.

* archiver: address lint nits

* services: archiver: only close the channel once

* services: archiver: use a struct{} for the wait channel

This makes it obvious that the channel is only being used as a signal,
rather than anything useful being piped through it.
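
Both points in miniature (a sketch; the real code may guard the close
differently than with sync.Once):

    type ArchiveRequest struct {
        archivePath string
        cchan       chan struct{} // carries no data; closing it is the signal
        closeOnce   sync.Once     // closing a channel twice panics
    }

    // markComplete signals completion exactly once, no matter how
    // many times it's called.
    func (req *ArchiveRequest) markComplete() {
        req.closeOnce.Do(func() { close(req.cchan) })
    }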

* archiver: tests: fix expectations

Move the close of the channel into doArchive() itself; notably, before these
goroutines move on to waiting on the Release cond.

The tests are adjusted to reflect that we can't WaitForCompletion() after
they've already completed, as WaitForCompletion() doesn't indicate that
they've been released from the queue yet.

* archiver: tests: set cchan to nil for comparison

* archiver: move ctx.Error's back into the route handlers

We shouldn't be setting this in a service; we should just be validating the
request that we were handed.

* services: archiver: use regex to match a hash

This makes sure we don't try to use refName as a hash when it's clearly not
one, e.g. heads/pull/foo.
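
A sketch of the check ("regexp" joins the sketch's imports; the exact
pattern in the PR may differ):

    // shaRegex accepts abbreviated through full-length SHA-1 hex hashes.
    var shaRegex = regexp.MustCompile(`^[0-9a-f]{4,40}$`)

    // isHash reports whether refName can plausibly be a commit hash;
    // "heads/pull/foo" fails the match and is treated as a ref name.
    func isHash(refName string) bool {
        return shaRegex.MatchString(refName)
    }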

* routers: repo: remove the weird /archive/status endpoint

We don't need to do this anymore; we can just continue POSTing to the
archive/* endpoint until we're told the download is complete. This avoids a
potential naming conflict, where a ref could start with "status/".

* archiver: tests: bump reasonable timeout to 15s

* archiver: tests: actually release timedReq

* archiver: tests: run through inFlight instead of manually checking

While we're here, add a test for manually re-processing an archive that's
already been completed. Re-open the channel and mark it incomplete, so that
doArchive can just mark it complete again.

* initArchiveLinks: prevent default behavior from clicking

* archiver: alias gitea's context, golang context import pending

* archiver: simplify logic, just reconstruct slices

While the previous logic was perhaps slightly more efficient, the
new variant's readability is much improved.

* archiver: don't block shutdown on waiting for archive

The technique established here launches a goroutine to do the wait, which
will close a wait channel upon termination. For the timeout case, we also
send back a value indicating whether the timeout was hit.

The timeouts are expected to be relatively small, but even so, a multi-
second delay to shutdown because of this could be unfortunate.

* archiver: simplify shutdown logic

We can just grab the shutdown channel from the graceful manager instead of
constructing a channel to halt the caller and/or pass a result back.
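
With the graceful manager's shutdown channel in hand (Gitea's
modules/graceful exposes one; the exact accessor here is an assumption),
the timed wait from the earlier sketch just selects on it too:

    // TimedWaitForCompletion now also bails out when the process is
    // shutting down, so an in-flight archive can't delay termination.
    func (req *ArchiveRequest) TimedWaitForCompletion(d time.Duration) bool {
        select {
        case <-req.cchan:
            return true
        case <-time.After(d):
            return false
        case <-graceful.GetManager().IsShutdown(): // assumed accessor
            return false
        }
    }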

* Style issues

* Fix mis-merge

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
2020-11-07 22:27:28 +02:00

-
  id: 1
  owner_id: 2
  owner_name: user2
  lower_name: repo1
  name: repo1
  is_empty: false
  is_private: false
  num_issues: 2
  num_closed_issues: 1
  num_pulls: 3
  num_closed_pulls: 0
  num_milestones: 3
  num_closed_milestones: 1
  num_watches: 4
  num_projects: 1
  num_closed_projects: 0
  status: 0
-
  id: 2
  owner_id: 2
  owner_name: user2
  lower_name: repo2
  name: repo2
  is_private: true
  num_issues: 2
  num_closed_issues: 1
  num_pulls: 0
  num_closed_pulls: 0
  num_stars: 1
  close_issues_via_commit_in_any_branch: true
  status: 0
-
  id: 3
  owner_id: 3
  owner_name: user3
  lower_name: repo3
  name: repo3
  is_private: true
  num_issues: 1
  num_closed_issues: 0
  num_pulls: 1
  num_closed_pulls: 0
  num_watches: 0
  num_projects: 1
  num_closed_projects: 0
  status: 0
-
  id: 4
  owner_id: 5
  owner_name: user5
  lower_name: repo4
  name: repo4
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  num_stars: 1
  num_projects: 0
  num_closed_projects: 1
  status: 0
-
  id: 5
  owner_id: 3
  owner_name: user3
  lower_name: repo5
  name: repo5
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  num_watches: 0
  is_mirror: true
  status: 0
-
  id: 6
  owner_id: 10
  owner_name: user10
  lower_name: repo6
  name: repo6
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  status: 0
-
  id: 7
  owner_id: 10
  owner_name: user10
  lower_name: repo7
  name: repo7
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  status: 0
-
  id: 8
  owner_id: 10
  owner_name: user10
  lower_name: repo8
  name: repo8
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  status: 0
-
  id: 9
  owner_id: 11
  owner_name: user11
  lower_name: repo9
  name: repo9
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  status: 0
-
  id: 10
  owner_id: 12
  owner_name: user12
  lower_name: repo10
  name: repo10
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 1
  num_closed_pulls: 0
  is_mirror: false
  num_forks: 1
  status: 0
-
  id: 11
  fork_id: 10
  owner_id: 13
  owner_name: user13
  lower_name: repo11
  name: repo11
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  status: 0
-
  id: 12
  owner_id: 14
  owner_name: user14
  lower_name: test_repo_12
  name: test_repo_12
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  status: 0
-
  id: 13
  owner_id: 14
  owner_name: user14
  lower_name: test_repo_13
  name: test_repo_13
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  status: 0
-
  id: 14
  owner_id: 14
  owner_name: user14
  lower_name: test_repo_14
  name: test_repo_14
  description: test_description_14
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  status: 0
-
  id: 15
  owner_id: 2
  owner_name: user2
  lower_name: repo15
  name: repo15
  is_empty: true
  status: 0
-
  id: 16
  owner_id: 2
  owner_name: user2
  lower_name: repo16
  name: repo16
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  num_watches: 0
  status: 0
-
  id: 17
  owner_id: 15
  owner_name: user15
  lower_name: big_test_public_1
  name: big_test_public_1
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  num_watches: 0
  is_mirror: false
  is_fork: false
  status: 0
-
  id: 18
  owner_id: 15
  owner_name: user15
  lower_name: big_test_public_2
  name: big_test_public_2
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: false
  status: 0
-
  id: 19
  owner_id: 15
  owner_name: user15
  lower_name: big_test_private_1
  name: big_test_private_1
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: false
  status: 0
-
  id: 20
  owner_id: 15
  owner_name: user15
  lower_name: big_test_private_2
  name: big_test_private_2
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: false
  status: 0
-
  id: 21
  owner_id: 16
  owner_name: user16
  lower_name: big_test_public_3
  name: big_test_public_3
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: false
  status: 0
-
  id: 22
  owner_id: 16
  owner_name: user16
  lower_name: big_test_private_3
  name: big_test_private_3
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: false
  status: 0
-
  id: 23
  owner_id: 17
  owner_name: user17
  lower_name: big_test_public_4
  name: big_test_public_4
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: false
  status: 0
-
  id: 24
  owner_id: 17
  owner_name: user17
  lower_name: big_test_private_4
  name: big_test_private_4
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: false
  status: 0
-
  id: 25
  owner_id: 20
  owner_name: user20
  lower_name: big_test_public_mirror_5
  name: big_test_public_mirror_5
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  num_watches: 0
  is_mirror: true
  is_fork: false
  status: 0
-
  id: 26
  owner_id: 20
  owner_name: user20
  lower_name: big_test_private_mirror_5
  name: big_test_private_mirror_5
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  num_watches: 0
  is_mirror: true
  is_fork: false
  status: 0
-
  id: 27
  owner_id: 19
  owner_name: user19
  lower_name: big_test_public_mirror_6
  name: big_test_public_mirror_6
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  num_watches: 0
  is_mirror: true
  num_forks: 1
  is_fork: false
  status: 0
-
  id: 28
  owner_id: 19
  owner_name: user19
  lower_name: big_test_private_mirror_6
  name: big_test_private_mirror_6
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  num_watches: 0
  is_mirror: true
  num_forks: 1
  is_fork: false
  status: 0
-
  id: 29
  fork_id: 27
  owner_id: 20
  owner_name: user20
  lower_name: big_test_public_fork_7
  name: big_test_public_fork_7
  is_private: false
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: true
  status: 0
-
  id: 30
  fork_id: 28
  owner_id: 20
  owner_name: user20
  lower_name: big_test_private_fork_7
  name: big_test_private_fork_7
  is_private: true
  num_issues: 0
  num_closed_issues: 0
  num_pulls: 0
  num_closed_pulls: 0
  is_mirror: false
  is_fork: true
  status: 0
-
  id: 31
  owner_id: 2
  owner_name: user2
  lower_name: repo20
  name: repo20
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 32 # org public repo
  owner_id: 3
  owner_name: user3
  lower_name: repo21
  name: repo21
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 33
  owner_id: 2
  owner_name: user2
  lower_name: utf8
  name: utf8
  is_private: false
  status: 0
-
  id: 34
  owner_id: 21
  owner_name: user21
  lower_name: golang
  name: golang
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 35
  owner_id: 21
  owner_name: user21
  lower_name: graphql
  name: graphql
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 36
  owner_id: 2
  owner_name: user2
  lower_name: commits_search_test
  name: commits_search_test
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 37
  owner_id: 2
  owner_name: user2
  lower_name: git_hooks_test
  name: git_hooks_test
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 38
  owner_id: 22
  owner_name: limited_org
  lower_name: public_repo_on_limited_org
  name: public_repo_on_limited_org
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 39
  owner_id: 22
  owner_name: limited_org
  lower_name: private_repo_on_limited_org
  name: private_repo_on_limited_org
  is_private: true
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 40
  owner_id: 23
  owner_name: limited_org
  lower_name: public_repo_on_private_org
  name: public_repo_on_private_org
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 41
  owner_id: 23
  owner_name: limited_org
  lower_name: private_repo_on_private_org
  name: private_repo_on_private_org
  is_private: true
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
-
  id: 42
  owner_id: 2
  owner_name: user2
  lower_name: glob
  name: glob
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 1
  num_milestones: 1
  is_mirror: false
-
  id: 43
  owner_id: 26
  owner_name: org26
  lower_name: repo26
  name: repo26
  is_private: true
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 44
  owner_id: 27
  owner_name: user27
  lower_name: template1
  name: template1
  is_private: false
  is_template: true
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 45
  owner_id: 27
  owner_name: user27
  lower_name: template2
  name: template2
  is_private: false
  is_template: true
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 46
  owner_id: 26
  owner_name: org26
  lower_name: repo_external_tracker
  name: repo_external_tracker
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 47
  owner_id: 26
  owner_name: org26
  lower_name: repo_external_tracker_numeric
  name: repo_external_tracker_numeric
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0
-
  id: 48
  owner_id: 26
  owner_name: org26
  lower_name: repo_external_tracker_alpha
  name: repo_external_tracker_alpha
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  num_pulls: 1
  is_mirror: false
  status: 0
-
  id: 49
  owner_id: 27
  owner_name: user27
  lower_name: repo49
  name: repo49
  is_private: false
  num_stars: 0
  num_forks: 0
  num_issues: 0
  is_mirror: false
  status: 0