# Remote storage (S3)
PeerTube version 3.4.0 added native support for storage in S3 compatible
object stores. If you are running this version or newer, you can use the new method
for S3 storage; otherwise, you can still follow the documentation for the
[old method](admin-remote-storage?id=old-object-storage-method).
## Native object storage
**PeerTube >= 3.4**
If your object storage provider supports the AWS S3 API, you can configure your
instance to move files there after transcoding. The bucket you configure should
be public and have CORS rules to allow traffic from anywhere.
Live videos are still stored on disk. If replay is enabled, they will be moved
to the object storage after transcoding.
### PeerTube Settings
#### Endpoint and buckets
Here are two examples of how you can configure your instance:
```yaml
# Store all videos in one bucket on Backblaze b2
object_storage:
  enabled: true

  # Example Backblaze b2 endpoint
  endpoint: 's3.us-west-001.backblazeb2.com'

  videos:
    bucket_name: 'peertube-videos'
    prefix: 'videos/'

  # Use the same bucket as for webtorrent videos but with a different prefix
  streaming_playlists:
    bucket_name: 'peertube-videos'
    prefix: 'playlists/'
```

```yaml
# Use two different buckets for webtorrent and HLS videos on AWS S3
object_storage:
  enabled: true

  # Example AWS endpoint in the us-east-1 region
  # The region specific endpoint is required for correct url generation
  endpoint: 's3.us-east-1.amazonaws.com'

  # Needs to be set to the bucket region when using AWS S3
  region: 'us-east-1'

  videos:
    bucket_name: 'webtorrent-videos'
    prefix: ''

  streaming_playlists:
    bucket_name: 'hls-videos'
    prefix: ''
```
#### Credentials
You will also need to supply credentials to the S3 client. The official AWS
S3 library is used in PeerTube, which supports multiple [credential loading methods](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-credentials-node.html).
We recommend loading credentials from the environment or a `~/.aws/credentials` file. When loading
from the environment, the usual `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
variables are used.
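For instance, you could export the variables in the environment that starts PeerTube, or create a credentials file for the PeerTube system user. A minimal sketch, assuming the placeholder key values below and `/var/www/peertube` as the PeerTube user's home directory:

```bash
# Option 1: export the credentials in the shell or systemd unit that
# starts PeerTube (placeholder values)
export AWS_ACCESS_KEY_ID='your-access-key-id'
export AWS_SECRET_ACCESS_KEY='your-secret-access-key'

# Option 2: store them in a credentials file read by the AWS SDK
mkdir -p /var/www/peertube/.aws
cat > /var/www/peertube/.aws/credentials <<'EOF'
[default]
aws_access_key_id = your-access-key-id
aws_secret_access_key = your-secret-access-key
EOF
```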
#### Cache server
To reduce object storage costs, we strongly recommend setting up a cache server (CDN/external proxy).
Set your mirror/CDN URL in `object_storage.{streaming_playlists,videos}.base_url` and PeerTube will replace
the object storage host with this base URL on the fly (so you can easily change the `base_url` configuration later).
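A minimal sketch, assuming a hypothetical caching proxy at `https://cdn.example.com` in front of the bucket:

```yaml
object_storage:
  streaming_playlists:
    bucket_name: 'hls-videos'
    prefix: ''
    # Hypothetical CDN/proxy that caches objects from the bucket above
    base_url: 'https://cdn.example.com'
```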
#### Max upload part
If you have trouble with uploads to object storage failing, you can try lowering
the part size. `object_storage.max_upload_part` is set to `2GB` by default; you can
experiment with this value to optimize uploading. Multiple uploads can happen
in parallel, but for one video the parts are uploaded sequentially.
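A minimal sketch; the `100MB` value is only an illustrative assumption, not a recommendation:

```yaml
object_storage:
  enabled: true

  # Defaults to 2GB; lower it if large multipart uploads to your provider fail
  max_upload_part: '100MB'
```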
### CORS settings

Because the browser will load the objects from object storage from a different URL than
the local PeerTube instance, cross-origin resource sharing rules apply.

You can solve this either by loading the objects through some kind of caching CDN
that you give access to and setting `object_storage.{streaming_playlists,videos}.base_url`
to that caching server, or by allowing access from all origins.

Allowing access from all origins on AWS S3 can be done in the permissions
tab of your bucket settings. You can set the policy to this for example:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
```

On Backblaze b2 you can set the CORS rules with the `b2` command line tool, for example:

```bash
b2 update-bucket --corsRules '[
    {
        "corsRuleName": "allowAnyOrigin",
        "allowedOrigins": ["*"],
        "allowedOperations": ["s3_get", "s3_head"],
        "maxAgeSeconds": 3600
    }
]' bucketname allPublic
```
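On AWS S3 the CORS rules themselves (distinct from the public read policy shown earlier) can also be set from the command line; a sketch with `your-bucket-name` as a placeholder:

```bash
# Allow GET/HEAD requests from any origin on the bucket
aws s3api put-bucket-cors --bucket your-bucket-name --cors-configuration '{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"]
    }
  ]
}'
```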
### Object storage migration
PeerTube stores object URLs in the database, so even if you change the object storage configuration,
it will keep serving previously uploaded videos from the old object storage endpoint while serving new uploads from
the new one.

PeerTube does not yet provide a migration for these file URLs.
## Old object storage method
PeerTube supports streaming directly from an S3 public bucket. The object storage integration
is done via FUSE, for instance with [s3fs](https://github.com/s3fs-fuse/s3fs-fuse).
```bash
export S3_STORAGE=/var/www/peertube/s3-storage
```
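As a minimal sketch, a bucket could then be mounted at that path with s3fs; the bucket name, endpoint URL and `~/.passwd-s3fs` credentials file below are placeholder assumptions (see the s3fs documentation):

```bash
# Hypothetical mount of an S3 bucket at the storage path; s3fs reads
# credentials from the passwd_file (chmod 600)
s3fs peertube-videos "$S3_STORAGE" \
  -o url='https://s3.us-west-001.backblazeb2.com' \
  -o passwd_file="$HOME/.passwd-s3fs" \
  -o allow_other
```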