API documentation
downloadermiddlewares
Scrapy downloader middleware (async/await rewrite)
This middleware preserves the same behaviour as the original Deferred-based
implementation but uses Python coroutines (async/await) and
asyncio.sleep for the delay. The public behaviour (delayed retries,
backoff, priority adjust, config keys) is unchanged.
DelayedRetryMiddleware
Bases: RetryMiddleware
Retry requests with a delay (async/await version).
Notes
- Uses asyncio.sleep to implement the delay. process_response is an async coroutine; Scrapy accepts coroutines from middleware methods and will await them appropriately when using an asyncio-compatible reactor.
- Behaviour and configuration keys are kept compatible with the original implementation.
Source code in src/scrapy_extensions/downloadermiddlewares.py, lines 28–143
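A minimal wiring sketch for settings.py, assuming the import path implied by the source file above. The asyncio reactor line reflects the note about coroutine support; the priority (550, the slot of the stock RetryMiddleware) and the decision to disable the stock middleware are illustrative choices, and any delayed-retry-specific settings are omitted because they are not listed on this page.

```python
# settings.py -- sketch, not the package's documented configuration.
# An asyncio-capable reactor lets Scrapy await the coroutine returned by
# process_response.
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

DOWNLOADER_MIDDLEWARES = {
    # Swap the stock RetryMiddleware for the delayed variant (same slot).
    "scrapy.downloadermiddlewares.retry.RetryMiddleware": None,
    "scrapy_extensions.downloadermiddlewares.DelayedRetryMiddleware": 550,
}

# Standard retry settings still apply, since the class extends RetryMiddleware.
RETRY_TIMES = 3
RETRY_HTTP_CODES = [429, 500, 502, 503, 504]
```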
process_response(request: Request, response: Response, spider: Spider) -> Request | Response
async
Retry certain requests with a delay.
This method is now a coroutine. If the response status matches a delayed-retry code, it awaits the computed delay and then returns the retry Request (or the original response if no retry Request is built). Otherwise it delegates to the parent implementation.
Source code in src/scrapy_extensions/downloadermiddlewares.py, lines 74–100
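To make the described control flow concrete, here is a stripped-down sketch of the same coroutine pattern. It is not the library's implementation: the set of delayed-retry codes and the delay value are placeholder attributes, and a real implementation would read them from settings.

```python
import asyncio

from scrapy.downloadermiddlewares.retry import RetryMiddleware, get_retry_request


class DelayedRetrySketch(RetryMiddleware):
    """Illustrative only: await a delay, then retry, for selected status codes."""

    delayed_codes = {429}  # placeholder
    delay_seconds = 5.0    # placeholder

    async def process_response(self, request, response, spider):
        if response.status in self.delayed_codes:
            # Await the delay, then build the retry request; fall back to the
            # original response if no retry request is produced.
            await asyncio.sleep(self.delay_seconds)
            retry_request = get_retry_request(
                request, spider=spider, reason=f"status {response.status}"
            )
            return retry_request or response
        # Everything else goes through the regular RetryMiddleware logic.
        return super().process_response(request, response, spider)
```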
extensions
Extensions.
LoopingExtension
Run a task in a loop.
Source code in src/scrapy_extensions/extensions.py, lines 80–117
setup_looping_task(task: Callable[..., object], crawler: Crawler, interval: float) -> None
Set up a task to run periodically at a given interval.
Source code in src/scrapy_extensions/extensions.py, lines 87–104
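A hedged sketch of an extension built on top of LoopingExtension. Only the setup_looping_task signature comes from the page above; the subclass name, the HEARTBEAT_INTERVAL setting, the zero-argument task and the from_crawler wiring are assumptions for illustration.

```python
from scrapy.crawler import Crawler

from scrapy_extensions.extensions import LoopingExtension


class HeartbeatExtension(LoopingExtension):
    """Hypothetical extension that prints a heartbeat at a fixed interval."""

    def __init__(self, crawler: Crawler, interval: float) -> None:
        # Schedule self.heartbeat to run every `interval` seconds.
        self.setup_looping_task(self.heartbeat, crawler, interval)

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> "HeartbeatExtension":
        interval = crawler.settings.getfloat("HEARTBEAT_INTERVAL", 60.0)  # assumed setting
        return cls(crawler, interval)

    def heartbeat(self) -> None:
        print("spider is still running")


# settings.py (sketch): EXTENSIONS = {"myproject.extensions.HeartbeatExtension": 500}
```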
NicerAutoThrottle
Bases: AutoThrottle
Autothrottling with exponential backoff depending on status codes.
Source code in src/scrapy_extensions/extensions.py, lines 24–76
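Enabling the class typically means replacing the built-in AutoThrottle extension. The sketch below assumes the import path implied by the source file above and does not show how the backoff status codes are configured, since that is not listed on this page.

```python
# settings.py -- sketch; AutoThrottle must be enabled for the subclass to run.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1.0
AUTOTHROTTLE_MAX_DELAY = 60.0

EXTENSIONS = {
    # Disable the built-in extension and register the backoff-aware variant.
    "scrapy.extensions.throttle.AutoThrottle": None,
    "scrapy_extensions.extensions.NicerAutoThrottle": 0,
}
```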
loggers
Logging classes.
QuietLogFormatter
Bases: LogFormatter
Be quieter about scraped items.
Source code in src/scrapy_extensions/loggers.py, lines 15–28
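Activating a custom log formatter is a single Scrapy setting; the class path follows from the source location above.

```python
# settings.py
LOG_FORMATTER = "scrapy_extensions.loggers.QuietLogFormatter"
```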
pipelines
Scrapy item pipelines.
BlurHashPipeline
Calculate the BlurHashes of the downloaded images.
Source code in src/scrapy_extensions/pipelines.py, lines 46–143
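Because the pipeline post-processes downloaded images, it would normally be registered after Scrapy's ImagesPipeline. The sketch below shows only that generic wiring; the priorities are illustrative, and the pipeline's own settings (image field names, BlurHash components, etc.) are not listed on this page.

```python
# settings.py -- sketch, not the package's documented configuration.
ITEM_PIPELINES = {
    "scrapy.pipelines.images.ImagesPipeline": 300,
    "scrapy_extensions.pipelines.BlurHashPipeline": 310,  # after images are stored
}
IMAGES_STORE = "images"  # required by ImagesPipeline
```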
from_crawler(crawler: Crawler) -> BlurHashPipeline
classmethod
Init from crawler.
Source code in src/scrapy_extensions/pipelines.py, lines 55–82
process_image_obj(image_obj: dict[str, Any], x_components: int = 4, y_components: int = 4) -> dict[str, Any]
Calculate the BlurHash of a given image.
Source code in src/scrapy_extensions/pipelines.py, lines 99–125
process_item(item: Any, spider: Spider) -> Any
Calculate the BlurHashes of the downloaded images.
Source code in src/scrapy_extensions/pipelines.py, lines 127–143
utils
Utility functions.
calculate_blurhash(image: str | Path | PIL.Image.Image, x_components: int = 4, y_components: int = 4) -> str
Calculate the blurhash of a given image.
Source code in src/scrapy_extensions/utils.py, lines 16–41
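A short usage sketch based on the signature above; the file name is a placeholder and must point to an existing image.

```python
from pathlib import Path

from PIL import Image

from scrapy_extensions.utils import calculate_blurhash

# Per the signature, either a path or an already-opened PIL image is accepted.
hash_from_path = calculate_blurhash(Path("cover.jpg"))  # placeholder file
hash_from_image = calculate_blurhash(Image.open("cover.jpg"), x_components=4, y_components=3)

print(hash_from_path, hash_from_image)
```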