Initial Version

defparam 2024-07-15 16:47:36 -04:00
commit a4abc81aaa
41 changed files with 2016 additions and 0 deletions

9
.gitignore vendored Normal file

@ -0,0 +1,9 @@
.venv
.aws-sam
app/tools/*
__pycache__
lemmacli.egg-info
samconfig.toml
template.yaml
*.log
app/tool_requirements.txt

33
Dockerfile Normal file

@ -0,0 +1,33 @@
# Use the official Python image from the Docker Hub
FROM python:3.12-slim
# Set environment variables to avoid interactive prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive
# Install dependencies
RUN apt-get update && \
apt-get install -y \
curl \
unzip \
wget \
git \
&& rm -rf /var/lib/apt/lists/*
# Install AWS CLI v2
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install && \
rm -rf awscliv2.zip aws
# Install AWS SAM CLI
RUN curl -sSL https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip -o sam-cli.zip && \
unzip sam-cli.zip -d sam-installation && \
./sam-installation/install && \
rm -rf sam-cli.zip sam-installation
# Verify installations
RUN aws --version && \
sam --version && \
python --version
WORKDIR /lambda

13
LICENSE Normal file

@ -0,0 +1,13 @@
Copyright 2024 Evan Custodio
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

277
README.md Normal file

@ -0,0 +1,277 @@
<h1 align="center">
<img src="images/lemma.png" alt="lemma" width="400px">
<br>
</h1>
<p align="center">
<a href="https://opensource.org/license/apache-2-0"><img src="https://img.shields.io/badge/license-Apache_2.0-_red.svg"></a>
<a href="https://github.com/defparam/lemma/issues"><img src="https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat"></a>
<a href="https://twitter.com/defparam"><img src="https://img.shields.io/twitter/follow/defparam.svg?logo=twitter"></a>
</p>
<p align="center">
<a href="#demo">Demo</a>
<a href="#features">Features</a>
<a href="#installation">Installation</a>
<a href="#lemma-web-client">Web Client</a>
<a href="#lemma-terminal-client">Terminal Client</a>
<a href="#FAQ">FAQ</a>
<a href="#Examples">Examples</a>
</p>
### Disclaimer
The author of this project is not responsible for any damage or data loss incurred as a result of using this software. Use this software at your own risk. While efforts have been made to ensure the accuracy and reliability of the software, it is provided "as is" without warranty of any kind. By using this software, you agree to assume all risks associated with its use. Opinions are those of the author and not of AWS. Review the [AWS pentesting policy](https://aws.amazon.com/security/penetration-testing/) prior to executing any security tools on AWS Lambda.
# Lemma
Lemma is a Python-based AWS Lambda package and client designed to execute packaged command-line tools in a scalable, remote environment on AWS Lambda. Lemma takes advantage of the new [Response Streaming](https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-streaming/) feature on AWS Lambda to stream real-time stdout back to the user as the tool is running. The Lemma project comprises three main components:
1) Lemma Lambda Function Package: This package bundles a collection of command-line Linux tools provided by the user, making them accessible via AWS Lambda. It allows users to execute these tools remotely and scale their executions across multiple lambda instances.
2) Web-CLI: This component provides a web-based terminal interface built with [xterm.js](https://xtermjs.org/), [AWS Lambda Web Adapter](https://github.com/awslabs/aws-lambda-web-adapter) and [FastAPI](https://fastapi.tiangolo.com/), accessible via the Lambda URL. This web UI allows users to execute their command-line tools packaged in the Lambda entirely within their web browser.
3) Terminal-CLI: A Python-based command-line interface tool in charge of invoking the Lemma Lambda function. This tool facilitates the remote execution of the Lambda-hosted tools from a local environment. It pipes stdin and stdout between the local and remote tools, providing the ability to run and scale CLI-based workflows onto Lambda and back using pipes.
While the intended use case for Lemma is to run verbose security tooling on AWS Lambda, it can be used for any type of command-line tool you wish to run remotely.
# Demo
Web-CLI:
<h1 align="center">
<img src="images/demo.gif">
<br>
</h1>
Terminal-CLI:
<h1 align="center">
<img src="images/demo2.gif">
<br>
</h1>
# Features
- Supports both a Web-CLI and a Terminal-CLI
- Quick and easy build script
- Support for adding your own custom tools
- Support for x86_64 and ARM64 lambda types
- Support for choosing memory, region and timeout
- Flexible terminal piping support
# Installation
## Requirements for Lemma Lambda
1) An AWS account
2) AWS access credentials with permissions to execute cloudformation templates
3) Docker, python3 with pip
## Lambda Build and Deploy Steps
Steps to build and deploy on a fresh Ubuntu 22 instance
1) `sudo apt update`
2) `sudo apt install docker.io python3 python3-pip`
3) `git clone https://github.com/defparam/lemma`
4) `cd lemma`
5) `export AWS_ACCESS_KEY_ID=<your access key id>`
6) `export AWS_SECRET_ACCESS_KEY=<your secret access key>`
7) `./build.sh`
8) Answer all of the prompts
9) Copy the Lambda URL (including the key) from the output
Web-CLI:
1) Open Chrome and simply browse to your Lambda URL (with the key)
Terminal-CLI:
1) While in the lemma directory: `pip3 install .`
2) Invoke: `lemma`
3) When asked for the Lambda URL, paste it into the prompt. The URL is saved in `~/.lemma/lemma.ini`
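The configuration is a plain INI file managed with Python's `configparser` (see `lemma/settings.py`). A minimal sketch of where the URL ends up (the URL shown in the comment is only a placeholder):

```python
# Minimal sketch: read the Lambda URL the client stored in ~/.lemma/lemma.ini.
# The layout mirrors lemma/settings.py; the example URL below is a placeholder.
import configparser
import os

config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.lemma/lemma.ini"))
# lemma.ini holds a single value under [DEFAULT], e.g.:
#   [DEFAULT]
#   lambda_url = https://<id>.lambda-url.<region>.on.aws/?key=<key>
print(config.get("DEFAULT", "lambda_url", fallback="<not configured>"))
```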
Build Walkthrough:
<h1 align="center">
<img src="images/build.gif">
<br>
</h1>
## Lemma Web Client
Lemma's web client is packaged inside the Lemma function itself for easy access. It is just one HTML file and one JavaScript file (plus xterm.js imported from a CDN). To access it, simply copy and paste your Lemma Lambda URL (with key) into your Chrome web browser and hit enter. For usage details, type the `help` command.
## Lemma Terminal Client
### Usage
```
positional arguments:
remote_command lemma <options> -- remote_command
options:
-h, --help show this help message and exit
-w WORKERS, --workers WORKERS
Number of concurrent Lambda service workers
-l, --lambda-url Prompt user to enter a new lambda url
-i INVOCATIONS, --invocations INVOCATIONS
The number of invocations of the remote command
-p, --per-stdin Invoke the remote command for each line of stdin (-i is ignored)
-d DIV_STDIN, --div-stdin DIV_STDIN
Divide stdin into DIV_STDIN parts at a newline boundary and invoke on each (-i is ignored)
-o, --omit-stdin Omit stdin to the remote command stdin
-e, --no-stderr prevent stderr from being streamed into response
-b, --line-buffered Stream only line chunks to stdout
-v, --verbose Enable verbose remote output
-t, --tools List available tools
```
| Remote Command Macro | Description
|-------------------------|----------------------------------------------------
| `%INDEX%` | Place this macro in a remote command to insert the current invocation count into the command (starts at 0)
| `%STDIN%` | Place this macro in a remote command to insert any data present on the lemma client's stdin (Warning: newline characters aren't permitted except in -p mode)
## FAQ
Q: Why did you make this? Aren't there other frameworks?
A: I recently read about a new Lambda feature, [Response Streaming](https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-streaming/), that was only a year old and thought about how wonderful it would be on Linux to pipe Lambda streams together with security tooling, because prior to that all responses back from Lambda were buffered. Furthermore, I saw Lambda's Web Adapter and thought it would be a super neat feature to have Lambda present a web page with a terminal to invoke these streaming commands.
Q: Does this work on MacOS or Windows?
A: In theory, yes, but at this point I've only tested Linux.
Q: Do you support other cloud providers?
A: No, mainly because I'm not sure whether other cloud providers even support response streaming with their FaaS products, and secondly I don't have the time to research it and make this tool generic.
Q: How do I package my own tools?
A: If you have a normal bash script, simply move it into the `./tools` directory, make it executable and re-build your lambda; it's that easy. If your tool installation requires more advanced setup, then place those steps into `./tools/install_tools.sh` and re-build your lambda. NOTE: inside a Lambda the only writable directory is `/tmp`, so if your tool needs a mutable settings area, create a wrapper script to manage it under `/tmp`.
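As a rough illustration, a tool can be any executable that reads arguments/stdin and writes to stdout. A hypothetical Python tool (the name and behaviour are invented for this example) dropped into `./tools/` might look like this:

```python
#!/usr/bin/env python3
# Hypothetical example tool (not part of the repo): save as ./tools/upper, chmod +x it,
# and re-run ./build.sh. It reads stdin, upper-cases it, and uses /tmp for scratch space
# since that is the only writable path inside a Lambda.
import sys
import tempfile

data = "" if sys.stdin.isatty() else sys.stdin.read()
with tempfile.NamedTemporaryFile(mode="w", dir="/tmp") as scratch:
    scratch.write(data)  # demonstrate that scratch files belong under /tmp
    scratch.flush()
    sys.stdout.write(data.upper())
    sys.stdout.flush()
```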
Q: Why do you support both arm64 and x86_64?
A: If you end up running A LOT of executions, to the point where you care about your AWS bill, you may want to use the arm64 architecture, since it is generally billed cheaper than x86_64. Billing rates also differ slightly depending on the region and on the memory configuration.
Q: Where do I get my `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` ?
A: You can generate them in IAM when you log into your AWS account. I won't go into how to do this since there are plenty of resources you can google.
Q: How come I can only run 10 parallel lambdas at a time?
A: This is a quota set for all new AWS accounts. To increase it to 100-1000, you have to place a quota increase request with AWS through your account.
Q: What's the deal with the `key` parameter on the Lambda URLs?
A: So this Lambda application basically provides an RCE API, and these Lambda URLs are publicly accessible. There is an IAM auth mode where you can sign your requests with SigV4, but I haven't implemented it yet. As a band-aid solution I added a poor man's randomly generated API key: when you access the URL with the key, the key is set as a cookie and you are redirected back to the root page. If the key is not correct, the Lambda returns 404. In general I recommend only keeping your Lambda URL deployed at times of use, then disabling/deleting it.
Q: Does lambda support streaming data into stdin of a function?
A: No, not at this time. When the client invokes a function, all stdin is already known and transmitted with the invoke. Only the stdout of a Lambda function supports streaming.
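For the curious, here is a minimal sketch of what a single invocation looks like from the client side (mirroring `lemma/lambda_worker.py`); the URL, key and command are placeholders for your own deployment. The key is presented as the `LEMMA_API_KEY` cookie, the whole of stdin is sent up-front as the POST body, and stdout streams back in chunks:

```python
# Minimal sketch of one invocation (mirrors lemma/lambda_worker.py).
# The URL, key and command below are placeholders.
import urllib.parse
import requests

lambda_url = "https://<id>.lambda-url.<region>.on.aws/"
api_key = "<key>"
cmd = urllib.parse.quote("demo Invoke - hello 0")

with requests.post(
    lambda_url + "runtool?cmd=" + cmd,
    cookies={"LEMMA_API_KEY": api_key},  # the key travels as a cookie
    data=b"hello\n",                     # all stdin is transmitted on invoke, not streamed
    stream=True,
) as response:
    for chunk in response.iter_content(chunk_size=4096):
        print(chunk.decode("utf-8", errors="replace"), end="")
```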
Q: I have a tool/workload that requires more than 15 minutes, how can I increase the lambda timeout?
A: You can't. Lambda is strict about the 15-minute maximum timeout. The way to solve this problem is to break your workflow into partial executions that each run in under 15 minutes.
Q: On the lemma Python client, what's the deal with these `-i`, `-p` and `-d` modes?
A: These modes are all mutually exclusive, and if none of them is specified the mode defaults to `-i 1`. The `-i` mode flag lets you explicitly specify the number of lambda executions of the remote command. The `-p` mode flag tells the client to perform a lambda execution for each line of stdin presented to the client, placing that line on the lambda function's stdin unless the user withholds it via the `-o` flag. The `-d` mode flag tells the client to consume all stdin at once, split it into `DIV_STDIN` parts (at line boundaries) and invoke the function on each part. The `-d` mode will not invoke lambda functions until all stdin data is received. The `-p` mode starts invoking lambda functions the moment a line is received on stdin.
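As a rough sketch of the `-d` splitting logic (the same ceiling division at newline boundaries as `lemma/input_adapter.py`):

```python
# Sketch of the -d DIV_STDIN splitting (see lemma/input_adapter.py): stdin is split at
# newlines into DIV_STDIN roughly equal parts, and the command is invoked once per part.
def divide_stdin(stdin_data: str, div_stdin: int) -> list:
    parts = stdin_data.split("\n")
    step = -(-len(parts) // div_stdin)  # ceiling division
    return ["\n".join(parts[i:i + step]) for i in range(0, len(parts), step)]

print(divide_stdin("aaa\nbbb\nccc\nddd\neee", 2))
# -> ['aaa\nbbb\nccc', 'ddd\neee']
```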
Q: What are remote command macros?
A: Remote command macros are a way for the client to insert data into your remote command. There are currently only 2 macros supported: `%INDEX%` and `%STDIN%`. `%INDEX%` allows you to put the invocation index into an argument of the remote command. This is useful when you need some differentiator between function executions. `%STDIN%` allows you to place whatever is on stdin onto the remote command. This is especially useful in `-p` mode, where you may want to take a line from stdin and apply it as an argument of the remote command.
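A small sketch of how the substitution works (mirroring `format_command` in `lemma/input_adapter.py`):

```python
# Sketch of macro expansion (mirrors format_command in lemma/input_adapter.py):
# %INDEX% becomes the invocation index and %STDIN% becomes the stripped stdin data.
def expand_macros(command: str, index: int, stdin_data: str) -> str:
    command = command.replace("%INDEX%", str(index))
    command = command.replace("%STDIN%", stdin_data.strip())
    return command

print(expand_macros("demo Invoke - %STDIN% %INDEX%", 3, "cat\n"))
# -> demo Invoke - cat 3
```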
Q: What's the reason for `-o`, `-b` and `-e` ?
A: `-o` is used when you want to use stdin data but do not want to send any of it to the remote function's stdin. This is useful when you want to use `-p` mode and the `%STDIN%` macro so that each line of stdin is only used as a remote command argument. `-b` (line-buffered mode) forces every thread worker to write responses to stdout in whole-line chunks. This prevents data from two or more thread workers ending up on the same line. It is off by default because unbuffered byte streaming is important for animated CLI tools. Lastly, `-e` tells the remote function to respond with stdout only; by default both stderr and stdout are sent back in the response stream.
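To illustrate `-b`, here is a rough sketch of the line buffering a worker performs before writing to the shared stdout (the chunk handling mirrors `lemma/lambda_worker.py`):

```python
# Sketch of -b line-buffered output (mirrors the buffering in lemma/lambda_worker.py):
# partial chunks are held back until a full line is available, so concurrent workers
# never interleave bytes on the same output line.
def line_buffered(chunks):
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            yield line + "\n"
    if buffer:  # flush any trailing partial line
        yield buffer

print(list(line_buffered(["par", "tial\nfull line\nrest"])))
# -> ['partial\n', 'full line\n', 'rest']
```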
Q: My tool runs super slow, what gives?
A: It's likely that the memory configuration you set for your lambda is too small for the tool you are trying to run. I ran into this issue with `dirsearch`, which led me to ultimately remove it from the default tools since it wouldn't run reliably with less than 1 GB of memory, whereas ffuf runs beautifully with 256 MB. I'm a big proponent of running Go-based tools on Lambda since they appear to consume much less memory than Python ones.
Q: You are missing tool X. Are you accepting PRs?
A: Yes, I know; I haven't spent time curating a good collection of default tools since I've been mostly focused on Lemma itself. I'm happy to review PRs for tool additions to `./tools/install_tools.sh` if they make sense.
Q: Is this expensive?
A: It really depends on your usage patterns, but if you are casually invoking tools interactively it is amazingly cheap or most likely free. However, use a pricing calculator to get an idea before doing any heavy workloads. No lambda is executing while you are idling in the Web-CLI. Loading the root HTML, lemma-term.js and tools.json each costs one lambda invocation, but after that lambda is only invoked when you run a tool or refresh/open the page again.
Q: I'm scared, what is `./build.sh` doing with my creds?
A: `./build.sh` runs AWS SAM (Serverless Application Model) on a single template; this creates and deploys a CloudFormation stack that essentially just synthesizes one lambda function and a couple of IAM permissions. It is very lightweight, and you can review the templates used in the `./templates` directory. You can also review `./build.log` and `./deploy.log` to get an idea of what is going on. In general, feel free to review all the code to get a sense of the project. The largest risk is `./tools/install_tools.sh` blindly pulling in 3rd-party software, so feel free to modify that if you are concerned about supply-chain risk.
Q: My deploy is failing and the deploy.log says `'MemorySize' value failed to satisfy constraint: Member must have value less than or equal to 3008`. How come?
A: AWS Lambda can technically support memory sizes from 128 to 10240 MB, but the account quota limits you to 3008 MB max. Place a quota increase request with AWS if you need the expanded memory support.
Q: I have a deployed lemma lambda and I'm trying to delete it but `./build.sh` is failing, what do I do?
A: If the delete fails, you can always log onto your AWS account in the browser, go to the region you deployed to, head over to `CloudFormation -> Stacks`, click into the `lemma` stack and click `Delete`.
## Examples
**Example 1:** A 1 worker example showing 10 invocations using STDIN and INDEX macros:
`echo cat | lemma -i 10 -w 1 -b -- demo Invoke - %STDIN% %INDEX%`
<img src="images/e1.png">
**Example 2:** A 10 worker example showing 10 invocations using STDIN and INDEX macros:
`echo cat | lemma -i 10 -w 10 -b -- demo Invoke - %STDIN% %INDEX%`
<img src="images/e2.png">
**Example 3:** A 5 worker example with an invocation per stdin line, where each line is used as an argument:
`printf "aaa\nbbb\nccc\nddd\neee" | lemma -p -b -w 5 -- demo test:%STDIN%`
<img src="images/e3.png">
**Example 4:** 2 workers running subfinder to pull domains for hackerone.com and bugcrowd.com, whose output goes directly to 10 workers checking whether https:443 is open on every domain subfinder found:
`printf "hackerone.com\nbugcrowd.com" | lemma -p -w 2 -b -e -- subfinder | lemma -d 10 -w 10 -b -e -- httpx -p https:443`
<img src="images/e4.png">
**Example 5:** Using TamperMonkey scripts to add context menus that forward commands to lemma Web-CLI
Here are some example TamperMonkey scripts:
```js
// ==UserScript==
// @name lemma: ffuf current host
// @namespace http://tampermonkey.net/
// @description Context menu to execute UserScript
// @version 0.1
// @author author
// @include *
// @grant GM_openInTab
// @run-at context-menu
// ==/UserScript==
(function() {
'use strict';
let lemmaurl = "https://votk3e7gxhxxxylrv2hd4ng7fa0hmarz.lambda-url.us-east-1.on.aws/?cmd=";
let ffufcmd = "run ffuf -w ./tools/wordlists/common.txt -u " + window.location.origin + "/FUZZ -mc 200";
lemmaurl = lemmaurl + encodeURIComponent(ffufcmd);
GM_openInTab(lemmaurl, { active: true });
})();
```
```js
// ==UserScript==
// @name lemma: smuggler current host
// @namespace http://tampermonkey.net/
// @description Context menu to execute UserScript
// @version 0.1
// @author author
// @include *
// @grant GM_openInTab
// @run-at context-menu
// ==/UserScript==
(function() {
'use strict';
let lemmaurl = "https://votk3e7gxhxxxylrv2hd4ng7fa0hmarz.lambda-url.us-east-1.on.aws/?cmd=";
let ffufcmd = "run smuggler -u " + window.location.origin;
lemmaurl = lemmaurl + encodeURIComponent(ffufcmd);
GM_openInTab(lemmaurl, { active: true });
})();
```
TamperMonkey/WebCLI Demo:
<img src="images/e5.gif">

0
app/__init__.py Executable file

166
app/main.py Normal file

@ -0,0 +1,166 @@
from fastapi import FastAPI, Query, Request
from fastapi.responses import StreamingResponse, FileResponse, Response
from fastapi.staticfiles import StaticFiles
import asyncio
import os
import json
import urllib.parse
import traceback
import requests
import time
import shlex
app = FastAPI()
# get a list of every file (not directory) in the tools directory
tools = [f for f in os.listdir("tools") if os.path.isfile(os.path.join("tools", f))]
# write it to a json file at the root of the static directory
with open("/tmp/tools.json", "w") as f:
json.dump(tools, f)
def access_allowed(request):
key = os.getenv("LEMMA_API_KEY")
ckey = request.cookies.get("LEMMA_API_KEY")
if ckey and ckey == key:
return None
if key and request.query_params.get("key") == key:
# return redirect to '/' with a cookie set
r = Response(status_code=302, headers={"Location": "/"})
# set the cookie as secure and httponly
r.set_cookie("LEMMA_API_KEY", key, secure=True, httponly=True)
return r
return Response(status_code=404)
@app.exception_handler(404)
async def custom_404_handler(request, exc):
# Customize the response here
return Response(status_code=404)
@app.exception_handler(405)
async def custom_405_handler(request, exc):
# Customize the response here
return Response(status_code=404)
# Mount the tools.json file to /static/tools.json
@app.get("/tools.json")
async def get_tools(request: Request):
response = access_allowed(request)
if response is not None:
return response
return FileResponse("/tmp/tools.json", media_type="application/json", headers={"Cache-Control": "no-store"})
@app.get("/static/lemma-term.js")
async def read_js(request: Request):
response = access_allowed(request)
if response is not None:
return response
return FileResponse("static/lemma-term.js", media_type="application/javascript", headers={"Cache-Control": "no-store"})
@app.get("/")
async def read_root(request: Request):
response = access_allowed(request)
if response is not None:
return response
return FileResponse("static/index.html", media_type="text/html", headers={"Cache-Control": "no-store"})
async def execute(command, stdinput=None, verbose=False, no_stderr=False):
global g_runningprocess
global g_timeout
global g_req_context
global g_lam_context
timeout = int(os.getenv("LEMMA_TIMEOUT", 60)) - 5 # subtract 5 seconds to allow for cleanup
time_start = time.time()
if g_req_context is not None:
# we are running on AWS Lambda
if verbose:
r = json.loads(g_req_context)
yield bytes(f"\x1b[32mLambda Request ID: \u001b[38;2;145;231;255m{r['requestId']}\x1b[0m\n", "utf-8")
url = "http://checkip.amazonaws.com/"
pubipv4 = requests.get(url).text.strip()
yield bytes(f"\x1b[32mLambda Public IPv4: \u001b[38;2;145;231;255m{pubipv4}\x1b[0m\n", "utf-8")
try:
if verbose:
yield bytes(f"\x1b[32mLambda Command: \u001b[38;2;145;231;255m", "utf-8") + bytes(str(shlex.split(command)), "utf-8") + b"\x1b[0m\n\n"
process = await asyncio.create_subprocess_exec(
*shlex.split(command),
stdin=asyncio.subprocess.PIPE if stdinput else None,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE if no_stderr else asyncio.subprocess.STDOUT
)
except FileNotFoundError:
if verbose:
yield b"\n\x1b[31mRemote Error:\x1b[0m command not found\n"
yield b"\r\n"
return
except:
# yield back the traceback if the command failed to execute
if verbose:
yield traceback.format_exc().encode()
yield b"\r\n"
return
# If input_data is provided, write it to the process's stdin
if stdinput:
process.stdin.write(stdinput)
await process.stdin.drain()
process.stdin.close()
# Read and yield stdout data
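# The 1-second read timeout lets this loop periodically re-check the overall Lemma
# timeout (wall clock) even when the tool produces no output for a while.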
while True:
try:
data = await asyncio.wait_for(process.stdout.read(4096), timeout=1)
except asyncio.exceptions.TimeoutError:
if (time.time() - time_start) > timeout:
process.kill()
if verbose:
yield b"\n\x1b[31mRemote Error:\x1b[0m lambda function timed out (Lemma Timeout: %d seconds)\n"%(timeout)
yield b"\r\n"
return
continue
if data:
yield data
else:
break
await process.wait()
if verbose:
yield b"\n\x1b[32mRemote Command Finished \x1b[38;2;145;231;255m- Elapsed Time: " + str(round(time.time() - time_start)).encode() + b" seconds\x1b[0m\n"
@app.post("/runtool")
async def tool(
request: Request,
cmd = Query(""),
verbose = Query("false"),
no_stderr = Query("false")
):
response = access_allowed(request)
if response is not None:
return response
verbose = True if verbose.lower() == "true" else False
no_stderr = True if no_stderr.lower() == "true" else False
global g_req_context
global g_lam_context
g_req_context = request.headers.get('x-amzn-request-context')
g_lam_context = request.headers.get('x-amzn-lambda-context')
stdinput = await request.body()
cmd = urllib.parse.unquote(cmd).strip()
# check if the command is in the tools directory
if not cmd or cmd.split()[0] not in tools:
return Response(status_code=200, content="\x1b[31mError:\x1b[0m Command not found\n".encode())
cmd = "./tools/" + cmd
headers = {
"X-Lemma-Timeout": os.getenv("LEMMA_TIMEOUT", "60")
}
return StreamingResponse(execute(cmd, stdinput, verbose, no_stderr), media_type="text/html", headers=headers)

16
app/requirements.txt Normal file

@ -0,0 +1,16 @@
annotated-types==0.6.0
anyio==4.2.0
click==8.1.7
exceptiongroup==1.2.0
fastapi==0.109.2
h11==0.14.0
idna==3.7
pydantic==2.6.1
pydantic_core==2.16.2
sniffio==1.3.0
starlette==0.36.3
typing_extensions==4.9.0
uvicorn==0.27.0.post1
setuptools
requests
-r tool_requirements.txt

5
app/run.sh Executable file

@ -0,0 +1,5 @@
#!/bin/bash
PATH=$PATH:$LAMBDA_TASK_ROOT/bin \
PYTHONPATH=$PYTHONPATH:/opt/python:$LAMBDA_RUNTIME_DIR \
exec python -m uvicorn --port=$PORT main:app

44
app/static/index.html Normal file

File diff suppressed because one or more lines are too long

416
app/static/lemma-term.js Executable file

@ -0,0 +1,416 @@
let disableInput = false;
let prompt = '\x1b[33mlemma$\x1b[0m ';
let commandBuffer = '';
let toollist = [];
let lineBuffer = [];
let history = [];
let historyIndex = -1;
let offset = 0;
function stripAnsiCodes(str) {
// Regular expression to match ANSI escape codes
const ansiRegex = /\x1b\[[0-9;]*m/g;
return str.replace(ansiRegex, '');
}
async function load_tools()
{
try {
const response = await fetch('/tools.json');
if (!response.ok) {
throw new Error('Network response was not ok');
}
const tools = await response.json();
return tools;
}
catch (error) {
console.error('Error fetching tools:', error);
}
return [];
}
function populate_tools() {
load_tools().then((tools) => {
toollist = [];
tools.forEach(tool => {
toollist.push(tool);
});
});
}
function list_tools(terminal) {
load_tools().then((tools) => {
terminal.write('Available Remote Tools:\r\n');
toollist = [];
tools.forEach(tool => {
toollist.push(tool);
terminal.write(` \u001b[38;2;145;231;255m${tool}\u001b[0m\r\n`);
});
terminal.write("\r\n");
terminal.write("Run using \x1b[32mrun \u001b[38;2;145;231;255m<tool> <args>\x1b[0m or \x1b[32mfork \u001b[38;2;145;231;255m<tool> <args>\x1b[0m or simply \u001b[38;2;145;231;255m<tool> <args>\x1b[0m\r\n");
terminal.write(prompt);
});
}
function typeNextChar(terminal, text, delay) {
return new Promise((resolve) => {
let index = 0;
function step() {
if (index < text.length) {
terminal.write(text[index]);
index++;
setTimeout(step, delay);
} else {
terminal.write('\r\n\r\n');
list_tools(terminal);
disableInput = false;
resolve();
}
}
step();
});
}
async function printWithDelay(terminal, text, delay) {
await typeNextChar(terminal, text, delay);
}
async function intro(terminal)
{
disableInput = true;
const sstr = "\u001b\u005b\u0033\u0033\u006d\u000d\u000a\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2584\u2584\u2584\u2584\u2584\u2584\u2584\u2584\u2584\u2584\u2584\u2584\u2584\u0020\u0020\u2590\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2584\u2588\u2584\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2584\u2588\u2584\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2584\u2584\u0020\u0020\u0020\u000d\u000a\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2580\u2580\u0020\u2590\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u2588\u0020\u0020\u0020\u000d\u000a\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2590\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2584\u2588\u0020\u0020\u2588\u2588\u0020\u0020\u0020\u000d\u000a\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2584\u2588\u2588\u2588\u2584\u2584\u2584\u2584\u2584\u0020\u0020\u0020\u0020\u0020\u2590\u2588\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u2590\u2588\u2588\u2588\u2588\u2584\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2580\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u000d\u000a\u0020\u0020\u0020\u0020\u0020\u0020\u2590\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2590\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u0020\u2590\u2588\u2588\u0020\u0020\u0020\u2584\u2588\u2580\u0020\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u0020\u0020\u2588\u2588\u2584\u0020\u0020\u2584\u2588\u2580\u0020\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u0020\u0020\u000d\u000a\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u0020\u0020\u0020\u2588\u2588\u2588\u2588\u2580\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u2590\u2588\u2588\u0020\u0020\u0020\u2588\u2588\u2588\u2588\u2580\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u2584\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u2580\u2588\u2588\u0020\u0020\u000d\u000a\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u0020\u0020\u2584\u2584\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u0020\u0020\u0020\u0020\
u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u2584\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2588\u0020\u000d\u000a\u0020\u0020\u0020\u0020\u0020\u2580\u2588\u2588\u2588\u2588\u2588\u2588\u2580\u2580\u2580\u0020\u0020\u0020\u0020\u2580\u2588\u2588\u2588\u2588\u2588\u2580\u2580\u2580\u0020\u0020\u0020\u2590\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u0020\u2588\u2588\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u258c\u0020\u0020\u0020\u2588\u2588\u2580\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u0020\u2588\u2588\u2580\u001b\u005b\u0030\u006d\u000d\u000a"
terminal.write(sstr+ "\r\n\r\n");
terminal.write(' ');
const phrase = 'Response Streaming CLI Tools on AWS Lambda';
const delay = 25; // Delay in milliseconds
printWithDelay(terminal, phrase, delay);
}
async function execute_remote_tool(terminal, args) {
const abortController = new AbortController();
try {
const url = new URL('/runtool', window.location.origin);
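// Note: args is percent-encoded here and again by URLSearchParams when the URL is
// serialized; the server side (app/main.py) unquotes the extra layer with urllib.parse.unquote.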
url.searchParams.set('cmd', encodeURIComponent(args));
url.searchParams.set('verbose', "true");
const response = await fetch(url.toString(), {
method: 'POST',
signal: abortController.signal,
});
if (!response.body) {
throw new Error('ReadableStream not supported.');
}
// Get the timeout from the header and set the timeout
const timeoutHeader = response.headers.get('X-Lemma-Timeout');
const timeout = parseInt(timeoutHeader, 10) * 1000; // Convert to milliseconds
// Set a timeout to abort the request
const timeoutId = setTimeout(() => {
abortController.abort();
terminal.write('\r\n\u001b[31mError: Stream timeout exceeded.\x1b[0m\r\n');
}, timeout);
const reader = response.body.getReader();
const decoder = new TextDecoder('utf-8');
while (true) {
const { done, value } = await reader.read();
if (done) {
clearTimeout(timeoutId); // Clear the timeout if done
break;
}
let chunk = decoder.decode(value, { stream: true });
// replace any \n with \r\n
chunk = chunk.replace(/\n/g, '\r\n');
terminal.write(chunk);
}
terminal.write('\r\n\u001b[38;2;145;231;255mRemote tool execution complete.\x1b[0m\r\n');
disableInput = false;
terminal.write(prompt);
} catch (error) {
if (error.name === 'AbortError') {
terminal.write('\r\n\u001b[31mError: Remote execution failed due to timeout.\x1b[0m\r\n');
} else {
terminal.write(`\r\nError: ${error.message}\r\n`);
}
disableInput = false;
terminal.write(prompt);
}
}
document.addEventListener('DOMContentLoaded', () => {
let truerows = 45;
let truecols = 150;
if (terminalContainer.clientWidth <= 1024) {
truecols = 100;
}
const terminal = new Terminal({
rows: truerows,
cols: truecols
});
const fitAddon = new FitAddon.FitAddon();
terminal.loadAddon(fitAddon);
function fitTerminal() {
const containerWidth = terminalContainer.clientWidth;
const containerHeight = terminalContainer.clientHeight;
const cols = truecols;
const rows = truerows;
const cellWidth = containerWidth / cols;
const cellHeight = containerHeight / rows;
// Set the font size based on the smallest dimension to maintain aspect ratio
const fontSize = Math.min(cellWidth, cellHeight) * 1.6;
console.log(cols)
// Apply the calculated font size to the terminal
terminal.options.fontSize = fontSize;
fitAddon.fit();
console.log("fitting terminal")
}
terminal.open(document.getElementById('terminalContainer'));
fitTerminal();
terminal.focus();
// Adjust terminal size when window is resized
window.addEventListener('resize', fitTerminal);
// check if the command query has been set
const urlParams = new URLSearchParams(window.location.search);
const command = urlParams.get('cmd');
if (command) {
populate_tools();
terminal.write(prompt+`${command}\r\n`);
executeCommand(command);
}
else
{
intro(terminal);
}
async function simpleShell(term, data) {
let CurX = term.buffer.active.cursorX;
let CurY = term.buffer.active.cursorY;
let MaxX = term.cols;
let MaxY = term.rows;
if (disableInput === true) {
return;
}
// string splitting is needed to also handle multichar input (eg. from copy)
for (let i = 0; i < data.length; ++i) {
const c = data[i];
if (c === '\r') { // <Enter> was pressed case
offset = 0;
term.write('\r\n');
if (lineBuffer.length) {
// we have something in line buffer, normally a shell does its REPL logic here
// for simplicity - just join characters and exec...
const command = lineBuffer.join('');
lineBuffer.length = 0;
history.push(command);
historyIndex = history.length;
executeCommand(command);
}
else {
term.write(prompt);
}
} else if (c === '\x7F') { // <Backspace> was pressed case
if (lineBuffer.length) {
if (offset === 0) {
if (CurX === 0) {
// go to the previous line end
term.write('\x1b[1A'); // control code: move up one line
term.write('\x1b[' + MaxX + 'C'); // control code: move to the end of the line
}
lineBuffer.pop();
term.write('\b \b');
}
}
} else if (['\x1b[5', '\x1b[6'].includes(data.slice(i, i + 3))) {
// not implemented
i += 3;
} else if (['\x1b[F', '\x1b[H'].includes(data.slice(i, i + 3))) {
if (data.slice(i, i + 3) === '\x1b[H') { // Home key
// not implemented
}
else if (data.slice(i, i + 3) === '\x1b[F') { // End key
// not implemented
}
i += 3;
} else if (['\x1b[A', '\x1b[B', '\x1b[C', '\x1b[D'].includes(data.slice(i, i + 3))) { // <arrow> keys pressed
if (data.slice(i, i + 3) === '\x1b[A') { // up arrow
if (historyIndex > 0) {
historyIndex--;
updateCommandBuffer(history[historyIndex]);
}
} else if (data.slice(i, i + 3) === '\x1b[B') { // down arrow
if (historyIndex < history.length - 1) {
historyIndex++;
updateCommandBuffer(history[historyIndex]);
} else {
historyIndex = history.length;
updateCommandBuffer('');
}
}
else if (data.slice(i, i + 3) === '\x1b[C') { // right arrow
// not implemented
}
else if (data.slice(i, i + 3) === '\x1b[D') { // left arrow
// not implemented
}
i += 2;
} else { // push everything else into the line buffer and echo back to user
// if we are at the end of the line,
// move up a row and to the beginning of the line
if (CurX === MaxX - 1) {
term.write('\r\n');
}
lineBuffer.push(c);
term.write(c);
}
}
}
terminal.onData(data => simpleShell(terminal, data));
function executeCommandSingle(command) {
// Empty function for now
//terminal.write(`Executing command: ${command}\r\n`);
// split command and get first token
const command0 = command.split(' ')[0];
const command1 = command.split(' ')[1];
if (command0 === 'help') {
terminal.write('Available Local Commands:\r\n');
terminal.write(' \x1b[32mhelp -\x1b[0m Show this help message\r\n');
terminal.write(' \x1b[32mclear -\x1b[0m Clear the terminal\r\n');
terminal.write(' \x1b[32mtools -\x1b[0m Show a list of remote tools\r\n');
terminal.write(' \x1b[32msize -\x1b[0m Show or Set terminal size (i.e size, size 45x100)\r\n');
terminal.write(' \x1b[32mrun <args> -\x1b[0m Run a remote tool in the current terminal\r\n');
terminal.write(' \x1b[32mfork <args> -\x1b[0m Run a remote tool in a new terminal\r\n');
terminal.write(prompt);
} else if (command0 === 'clear') {
terminal.clear();
terminal.write(prompt);
} else if (command0 === 'tools') {
list_tools(terminal);
} else if (command0 === 'reset') {
truerows = 45;
truecols = 150;
if (terminalContainer.clientWidth <= 1024) {
truecols = 100;
}
terminal.clear();
terminal.resize(truecols, truerows); // xterm's resize() takes (columns, rows)
fitTerminal()
intro(terminal);
} else if (command0 === 'size') {
if (command1 === undefined) {
terminal.write("Terminal size: " + truerows + "x" + truecols + "\r\n");
terminal.write(prompt);
return
}
const r = command1.split('x')[0];
const c = command1.split('x')[1];
if (r === undefined || c === undefined) {
terminal.write(prompt);
return
}
// resize the terminal based on r and c (xterm's resize() takes columns first)
terminal.resize(parseInt(c, 10), parseInt(r, 10));
truerows = parseInt(r, 10);
truecols = parseInt(c, 10);
fitTerminal()
terminal.write(prompt);
} else if (command0 === 'fork') {
const args = ("run " + command.split(' ').slice(1).join(' '));
const url = new URL(window.location.href);
const finalurl = url.origin + url.pathname + "?cmd=" + encodeURIComponent(args);
window.open(finalurl, '_blank');
terminal.write(prompt);
} else if (command0 === 'run') {
const args = command.split(' ').slice(1).join(' ');
disableInput = true;
execute_remote_tool(terminal, args);
} else {
// check if the command is a tool
if (toollist.includes(command0))
{
disableInput = true;
execute_remote_tool(terminal, command);
}
else
{
terminal.write(`\x1b[31mCommand not found:\x1b[0m ${command0}\r\n`);
terminal.write(prompt);
}
}
}
function executeCommand(command) {
if (command.includes(';')) {
const commands = command.split(';');
commands.forEach((cmd) => {
// Execute each command in the list and trim any leading/trailing whitespace
executeCommandSingle(cmd.trim());
});
} else {
executeCommandSingle(command);
}
}
function updateCommandBuffer(command) {
// Clear current line
terminal.write('\r'+prompt + ' '.repeat(lineBuffer.length) + '\r'+prompt);
// push every character in the command to the lineBuffer
lineBuffer = command.split('');
// Convert the command string into an array of characters
let commandArray = command.split('');
let promptlen = stripAnsiCodes(prompt).length
let i = promptlen;
while (i < commandArray.length) {
if (i % terminal.cols === 0 && i !== 0 ) {
commandArray.splice(i-promptlen-1, 0, '\r\n');
}
i++;
}
terminal.write(commandArray.join(''));
}
// Apply custom CSS to make the scrollbar invisible
const terminalElement = document.querySelector('#terminal .xterm-viewport');
if (terminalElement) {
terminalElement.style.overflowY = 'hidden';
}
});

233
build.sh Executable file

@ -0,0 +1,233 @@
#!/bin/bash
logo() {
echo -e "\x1b[33m"
echo -e " ██ ▄▄▄▄▄▄▄▄▄▄▄▄▄ ▐██ ▄█▄ ██ ▄█▄ ▄▄ "
echo -e " ██▌ ▀▀ ▐██ ███ ███▌ ███ ███▌ ████ "
echo -e " ▐██ ███ ███▌ ████ ███▌ ████ ▄█ ██ "
echo -e " ██▌ ▄███▄▄▄▄▄ ▐████ █████ ▐████▄ █████ █▀ ██▌ "
echo -e " ▐██ ▐██ ██ ▐██ ▄█▀ ███ ██ ██▄ ▄█▀ ███ ██████████ "
echo -e " ███ ███ ███ ████▀ ██▌ ▐██ ████▀ ██▌ ▄██ ▀██ "
echo -e " ██▌ ▄▄ ██▌ ██ ██▌ ██ ██▌ ▄██ ███ "
echo -e " ▀██████▀▀▀ ▀█████▀▀▀ ▐██ ██▌ ██ ██▌ ██▀ ██▀\x1b[0m "
echo -e ""
echo -e " Response Streaming CLI Tools on AWS Lambda "
echo -e ""
echo -e "Lemma build and deploy script"
echo -e ""
}
function choose_aws_region() {
local choice
while true; do
read -p "Choose an AWS region to deploy to [default: us-east-1]: " choice
choice=${choice:-us-east-1}
if [[ "$choice" =~ ^[a-zA-Z0-9-]+$ ]]; then
# return the choice
aws_region=$choice
break
else
echo "Invalid choice. Please enter a valid AWS region."
fi
done
}
function choose_architecture() {
local choice
while true; do
read -p "Choose architecture (arm64 or x86_64) [default: arm64]: " choice
choice=${choice:-arm64}
if [[ "$choice" == "arm64" || "$choice" == "x86_64" ]]; then
# return the choice
arch=$choice
break
else
echo "Invalid choice. Please enter 'arm64' or 'x86_64'."
fi
done
}
function lambda_timeout() {
local choice
while true; do
read -p "Choose lambda timeout (1-900 seconds) [default: 300]: " choice
choice=${choice:-300}
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le 900 ]; then
# return the choice
lambda_timeout=$choice
break
else
echo "Invalid choice. Please enter a number between 1 and 900."
fi
done
}
function lambda_memory() {
local choice
while true; do
read -p "Choose lambda memory limit (multiples of 64, 128-10240 MB) [default: 256]: " choice
choice=${choice:-256}
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 128 ] && [ "$choice" -le 10240 ] && [ "$(($choice % 64))" -eq 0 ]; then
# return the choice
lambda_memory=$choice
break
else
echo "Invalid choice. Please enter a number between 128 and 10240, a multiple of 64 only."
fi
done
}
function install_tools() {
# ask a Y/N question to the user if they want to install tools, default is Y
local choice
while true; do
read -p "Do you want to install tools into the lambda package? [Y/n]: " choice
choice=${choice:-Y}
if [[ "$choice" == "Y" || "$choice" == "y" ]]; then
# run the install tools script
echo "Installing tools..."
./tools/install_tools.sh $arch
echo -e "Tools installed\n"
break
elif [[ "$choice" == "N" || "$choice" == "n" ]]; then
break
else
echo "Invalid choice. Please enter 'Y' or 'N'."
fi
done
}
function remove_template_ask() {
# ask a Y/N question to the user if they want to recreate template.yaml, default is N
local choice
while true; do
read -p "template.yaml exists, create a new template? [y/N]: " choice
choice=${choice:-N}
if [[ "$choice" == "Y" || "$choice" == "y" ]]; then
rm -f template.yaml
echo -e "Template removed\n"
break
elif [[ "$choice" == "N" || "$choice" == "n" ]]; then
echo -e ""
break
else
echo "Invalid choice. Please enter 'Y' or 'N'."
fi
done
}
logo
if [ -f /.dockerenv ]; then
echo "Lemma Build and Deploy..."
else
echo "Docker Build and Run..."
docker build -t lemma .
if [ "$1" == "delete" ]; then
docker run -it --rm -v ~/.aws:/root/.aws -v .:/lambda \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
lemma /lambda/build.sh delete
exit 0
fi
# forward AWS credentials to the container in both .aws and environment variables
docker run -it --rm -v ~/.aws:/root/.aws -v .:/lambda \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
lemma /lambda/build.sh
exit 0
fi
# check if arg1 is 'delete'
if [ "$1" == "delete" ]; then
echo "Removing Lemma Lambda from AWS... (Please wait, this may take a while)"
sam delete --no-prompts > delete.log 2>&1
# check if it fails
if [ $? -ne 0 ]; then
echo -e "\x1b[31mRemoval failed. Check delete.log for more information.\x1b[0m"
exit 1
fi
echo -e "Removal successful\n"
exit 0
fi
# check if template.yaml exists
if [ -f template.yaml ]; then
remove_template_ask
fi
# if template.yaml doesn't exist, run these functions
if [ ! -f template.yaml ]; then
cp -f ./templates/samconfig.toml .
choose_aws_region
echo -e "AWS region specified: $aws_region\n"
# replace %REGION% with aws_region
sed -i "s/%REGION%/$aws_region/g" samconfig.toml
choose_architecture
echo -e "Architecture specified: $arch\n"
lambda_timeout
echo -e "Lambda timeout specified: $lambda_timeout\n"
lambda_memory
echo -e "Lambda memory specified: $lambda_memory\n"
# generate a random API key
api_key=$(openssl rand -hex 8)
#check if arch is arm64
if [ "$arch" == "arm64" ]; then
cp ./templates/template_arm64.yaml ./template.yaml
else
cp ./templates/template_x86.yaml ./template.yaml
fi
# replace %MEMORY% with lambda_memory
sed -i "s/%MEMORY%/$lambda_memory/g" template.yaml
# replace %TIMEOUT% with lambda_timeout
sed -i "s/%TIMEOUT%/$lambda_timeout/g" template.yaml
# replace %API_KEY% with api_key
sed -i "s/%API_KEY%/$api_key/g" template.yaml
fi
arch=$(grep -A 1 'Architectures:' template.yaml | awk '/- / {print $2}')
api_key=$(grep -A 5 'Environment:' template.yaml | grep 'LEMMA_API_KEY:' | awk '{print $2}')
install_tools
rm -rf .aws-sam
echo "Building Lemma Lambda... (Please wait, this may take a while)"
sam build > build.log 2>&1
# check if it fails
if [ $? -ne 0 ]; then
echo -e "\x1b[31mBuild failed. Check build.log for more information.\x1b[0m"
exit 1
fi
echo -e "Build successful\n"
echo "Deploying Lemma Lambda to AWS... (Please wait, this may take a while)"
sam deploy > deploy.log 2>&1
# check if it fails
if [ $? -ne 0 ]; then
echo -e "\x1b[31mDeployment failed. Check deploy.log for more information.\x1b[0m"
exit 1
fi
echo -e "Deployment successful\n"
echo -e "To remove the Lambda, run: \x1b[32m./build.sh delete\x1b[0m"
echo -e "To update the Lambda with new tools re-run \x1b[32m./build.sh\x1b[0m\n"
URL=$(tr -d '\n ' < deploy.log | sed -n 's/.*LEMMA_URL:\(https:\/\/[^ ]*\.aws\/\).*/\1/p')
echo -e "Your Lemma Lambda URL is: \x1b[32m$URL?key=$api_key\x1b[0m"

BIN
images/build.gif Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.4 MiB

BIN
images/demo.gif Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 71 KiB

BIN
images/demo2.gif Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 25 KiB

BIN
images/demo3.gif Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 29 KiB

BIN
images/e1.png Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 8.0 KiB

BIN
images/e2.png Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 11 KiB

BIN
images/e3.png Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 4.5 KiB

BIN
images/e4.png Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 14 KiB

BIN
images/e5.gif Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.6 MiB

BIN
images/lemma.png Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 182 KiB

0
lemma/__init__.py Normal file

28
lemma/__main__.py Normal file

@ -0,0 +1,28 @@
from lemma.settings import get_settings, get_profiler
from lemma.pipeline import Pipeline
from lemma.input_adapter import InputAdapter
from lemma.output_adapter import OutputAdapter
import sys
def main():
settings = get_settings()
if settings.remote_command == "":
print("error: no remote command specified", file=sys.stderr)
exit(1)
if not settings.stdin_pipe_exists and settings.args.per_stdin:
print("error: cannot invoke per stdin line, stdin has no pipe", file=sys.stderr)
exit(1)
pipe = Pipeline(
settings=settings,
input_adapter=InputAdapter(),
output_adapter=OutputAdapter(),
)
pipe.run()
if __name__ == "__main__":
main()

77
lemma/input_adapter.py Executable file

@ -0,0 +1,77 @@
from lemma.settings import get_settings
from select import select
import sys
def format_command(command, index, stdin_data):
if (('%STDIN%' in command) and ("\n" in stdin_data.strip())):
print("error: cannot place stdin onto remote command with newline characters", file=sys.stderr)
exit(1)
command = command.replace(r"%INDEX%", str(index))
command = command.replace(r"%STDIN%", stdin_data.strip())
return command
class InputAdapter:
def __init__(self):
self.settings = get_settings()
self.done = False
self.stdin_closed = False
self.count = 0
def readline_stdin(self):
# Check if there is any data to read from stdin
ready_to_read, _, _ = select([sys.stdin], [], [], 1.0)
if sys.stdin in ready_to_read:
line = sys.stdin.readline()
if line == "":
self.stdin_closed = True
return None
return line
else:
return None
def read_stdin(self):
if self.stdin_closed:
return None
data = sys.stdin.read()
self.stdin_closed = True
return data
def process(self, command_queue):
if self.done:
return
# lambdas are either invoked per stdin line or per the invocations argument value
if self.settings.args.per_stdin:
line = self.readline_stdin()
if line is not None:
remote_command = self.settings.remote_command
command_queue.put((format_command(remote_command, self.count, line),line,))
self.count += 1
if self.stdin_closed:
self.done = True
elif self.settings.args.div_stdin:
stdin_data = ""
if self.settings.stdin_pipe_exists:
stdin_data = self.read_stdin()
remote_command = self.settings.remote_command
# we need to divide the stdin data into div_stdin equal parts at the newline boundary
parts = stdin_data.split('\n')
parts_len = len(parts)
# create step and round up to the nearest integer
step = -(-parts_len // int(self.settings.args.div_stdin))
for i in range(0, parts_len, step):
stdin_data = '\n'.join(parts[i:i+step])
command_queue.put((format_command(remote_command, self.count, stdin_data),stdin_data,))
self.count += 1
self.done = True
else:
stdin_data = ""
if self.settings.stdin_pipe_exists:
stdin_data = self.read_stdin()
remote_command = self.settings.remote_command
for i in range(int(self.settings.args.invocations)):
command_queue.put((format_command(remote_command, i, stdin_data),stdin_data,))
self.done = True

108
lemma/lambda_worker.py Executable file

@ -0,0 +1,108 @@
import time
import select
from lemma.settings import get_settings
import requests
import urllib.parse
import random
class LambdaService:
def __init__(self, worker):
self.worker = worker
def invoke(self, command, stdin):
# URL encode the command
command = urllib.parse.quote(command)
body_data = stdin
url = get_settings().lambda_url + '/runtool?cmd=' + command
if get_settings().args.verbose:
url += '&verbose=true'
if get_settings().args.no_stderr:
url += '&no_stderr=true'
# Set the LEMMA_API_KEY cookie
cookies = {'LEMMA_API_KEY': get_settings().lambda_key}
# If --omit-stdin was requested, don't send any stdin data in the request body
if get_settings().args.omit_stdin:
body_data = ""
with requests.Session() as session:
# Perform a POST request to the Lambda URL with streaming enabled
with session.post(url, cookies=cookies, data=body_data, stream=True) as response:
# Ensure the request was successful
try:
response.raise_for_status()
except:
print(response.status_code)
return response.status_code
# Get the X-Lemma-Timeout header
timeout = response.headers.get('X-Lemma-Timeout')
if timeout:
timeout = float(timeout)
start_time = time.time()
buffer = ""
sock = response.raw._fp.fp.raw._sock # Get the raw socket from the response
x = response.raw.stream(4096, decode_content=False)
# Process the stream in chunks
while True:
rlist, _, _ = select.select([sock], [], [], 0.01)
# Break the loop if no more data to read
if not rlist:
if timeout and (time.time() - start_time) > timeout:
# LEMMA timeout exceeded, bail out the thread
break
continue
try:
chunk = next(x)
except StopIteration:
break
decoded_chunk = chunk.decode('utf-8')
if get_settings().args.line_buffered:
buffer += decoded_chunk
while '\n' in buffer:
line, buffer = buffer.split('\n', 1)
self.worker.push(line + '\n')
else:
self.worker.push(decoded_chunk)
if buffer:
self.worker.push(buffer)
return response.status_code
class LambdaWorker:
stop = False
def __init__(self, command_queue, stdout_queue):
self.command_queue = command_queue
self.stdout_queue = stdout_queue
self.idle = True
def push(self, stdoutitem):
self.stdout_queue.put(stdoutitem)
def run(self):
while True:
if LambdaWorker.stop:
break
try:
command, stdin = self.command_queue.get_nowait()
except:
continue
self.idle = False
LambdaService(self).invoke(command, stdin)
self.command_queue.task_done()
self.idle = True
@classmethod
def stop_all_workers(cls):
LambdaWorker.stop = True

12
lemma/logo.py Executable file

@ -0,0 +1,12 @@
logo = """\x1b[33m
\x1b[0m
Response Streaming CLI Tools on AWS Lambda
"""

13
lemma/output_adapter.py Executable file

@ -0,0 +1,13 @@
from lemma.settings import get_settings
class OutputAdapter:
def __init__(self):
self.settings = get_settings()
def process(self, stdout_queue):
if (stdout_queue.empty()):
return
data = stdout_queue.get()
print(data, end='', flush=True)
stdout_queue.task_done()

42
lemma/pipeline.py Executable file

@ -0,0 +1,42 @@
import sys, time
from threading import Thread
from queue import Queue
from lemma.lambda_worker import LambdaWorker
class Pipeline:
def __init__(self, settings, input_adapter, output_adapter):
self.settings = settings
self.input_adapter = input_adapter
self.output_adapter = output_adapter
self.command_queue = Queue()
self.stdout_queue = Queue()
self.worker_pool = []
def pool_create(self):
self.worker_pool = []
for i in range(int(self.settings.args.workers)):
worker = LambdaWorker(self.command_queue, self.stdout_queue)
self.worker_pool.append(worker)
thread = Thread(target=LambdaWorker.run, args=(worker,), daemon=True)
thread.start()
def pool_idle(self):
return all(worker.idle for worker in self.worker_pool)
def pool_stop(self):
LambdaWorker.stop_all_workers()
def queues_empty(self):
return self.command_queue.empty() and self.stdout_queue.empty()
def run(self):
self.pool_create()
while True:
self.input_adapter.process(self.command_queue)
self.output_adapter.process(self.stdout_queue)
time.sleep(0.01)
if self.input_adapter.done and self.queues_empty() and self.pool_idle():
self.pool_stop()
break

180
lemma/settings.py Executable file

@ -0,0 +1,180 @@
from lemma.logo import logo
from functools import lru_cache
from argparse import ArgumentParser, REMAINDER, RawDescriptionHelpFormatter
from configparser import ConfigParser
import urllib.parse
import requests
import os, sys
import cProfile
def tools(settings):
print('Available Remote Tools:')
url = settings.lambda_url + 'tools.json'
try:
response = requests.get(url, cookies={'LEMMA_API_KEY': settings.lambda_key},timeout=5)
except:
print(' \u001b[31m<could not access lambda url>\u001b[0m', file=sys.stderr)
sys.exit(1)
if response.status_code != 200:
print(' \u001b[31m<could not access lambda url>\u001b[0m', file=sys.stderr)
sys.exit(1)
tools = response.json()
for tool in tools:
print(' \u001b[38;2;145;231;255m' + tool + '\u001b[0m')
@lru_cache(maxsize=1)
def get_args():
parser = ArgumentParser(description=logo,formatter_class=RawDescriptionHelpFormatter)
parser.add_argument('-w', '--workers', default=1, help='Number of concurrent Lambda service workers')
parser.add_argument('-l', '--lambda-url', help='Prompt user to enter a new lambda url', action='store_true')
parser.add_argument('-i', '--invocations', default=1, help='The number of invocations of the remote command')
parser.add_argument('-p', '--per-stdin', help='Invoke the remote command for each line of stdin (-i is ignored)', action='store_true')
parser.add_argument('-d', '--div-stdin', help='Divide stdin into DIV_STDIN parts at a newline boundary and invoke on each (-i is ignored)')
parser.add_argument('-o', '--omit-stdin', help='Omit stdin to the remote command stdin', action='store_true')
parser.add_argument('-e', '--no-stderr', help='prevent stderr from being streamed into response', action='store_true')
parser.add_argument('-b', '--line-buffered', help='Stream only line chunks to stdout', action='store_true')
parser.add_argument('-v', '--verbose', help='Enable verbose remote output', action='store_true')
parser.add_argument('-t', '--tools', help='List available tools', action='store_true')
parser.add_argument('remote_command', help='lemma <options> -- remote_command',nargs=REMAINDER)
args = parser.parse_args()
return args, parser
def validate_args(settings):
parser = settings.parser
args = settings.args
if len(sys.argv) == 1:
parser.print_help()
sys.stdout.write('\n')
tools(settings)
sys.exit(1)
if args.tools:
tools(settings)
sys.exit(0)
# validate that -d and -p are not used together
if args.div_stdin and args.per_stdin:
print('error: -d and -p cannot be used together', file=sys.stderr)
sys.exit(1)
if args.div_stdin and args.omit_stdin:
print('error: -d and -o cannot be used together', file=sys.stderr)
sys.exit(1)
# args.div_stdin must be a non-zero positive integer
if args.div_stdin:
try:
if int(args.div_stdin) <= 0:
raise ValueError
except:
print('error: -d must be a non-zero positive integer', file=sys.stderr)
sys.exit(1)
class Settings:
    def __init__(self):
        # Parse CLI arguments
        args, parser = get_args()
        self.config = ConfigParser()
        self._load_config()
        if args.lambda_url:
            self.ask_config()

    def ask_config(self):
        self.lambda_url = input('Please enter the URL of the Lambda service: ')

    def _load_config(self):
        newconfig = False
        config_dir_path = os.path.expanduser('~/.lemma')
        config_file_path = os.path.join(config_dir_path, 'lemma.ini')
        # check if the config file exists
        if not os.path.exists(config_file_path):
            newconfig = True
            # Ensure the directory exists
            os.makedirs(config_dir_path, exist_ok=True)
            # Ensure the file exists
            open(config_file_path, 'a').close()
        self.config.read([config_file_path])
        if newconfig:
            print("Welcome to Lemma! We could not find a configuration file, so let's create one for you.")
            self.ask_config()

    def _save_config(self):
        config_dir_path = os.path.expanduser('~/.lemma')
        config_file_path = os.path.join(config_dir_path, 'lemma.ini')
        with open(config_file_path, 'w') as configfile:
            self.config.write(configfile)

    @property
    def args(self):
        args, _ = get_args()
        return args

    @property
    def parser(self):
        _, parser = get_args()
        return parser

    @property
    def remote_command(self):
        remote_command = '--'.join(((' '.join(self.args.remote_command)).split('--')[1:]))
        return remote_command.strip()

    @property
    def lambda_url(self):
        lurl = self.config.get('DEFAULT', 'lambda_url', fallback=None)
        if lurl is None:
            print('error: no lambda url specified', file=sys.stderr)
            sys.exit(1)
        # parse the url and keep only the scheme and hostname
        parsed = urllib.parse.urlparse(lurl)
        return parsed.scheme + '://' + parsed.netloc + '/'

    @property
    def lambda_key(self):
        lurl = self.config.get('DEFAULT', 'lambda_url', fallback=None)
        if lurl is None:
            print('error: no lambda url specified', file=sys.stderr)
            sys.exit(1)
        # parse the url and get the query variable named "key"
        parsed = urllib.parse.urlparse(lurl)
        query = urllib.parse.parse_qs(parsed.query)
        return query.get('key', [''])[0]

    @lambda_url.setter
    def lambda_url(self, value):
        self.config.set('DEFAULT', 'lambda_url', value)
        self._save_config()

    @property
    def stdin_pipe_exists(self) -> bool:
        return not sys.stdin.isatty()


@lru_cache(maxsize=1)
def get_settings() -> Settings:
    s = Settings()
    validate_args(s)
    return s


@lru_cache(maxsize=1)
def get_profiler():
    return cProfile.Profile()

20
setup.py Normal file
View File

@ -0,0 +1,20 @@
from setuptools import setup

setup(
    name='lemmacli',
    version='0.1.0',
    packages=['lemma'],
    include_package_data=True,
    install_requires=[
        "requests",
    ],
    entry_points={
        'console_scripts': [
            'lemma=lemma.__main__:main',
        ],
    },
    classifiers=[
        'Programming Language :: Python :: 3',
        'Operating System :: OS Independent',
    ],
    python_requires='>=3.6',
)
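# With this setup.py in place the client can be installed locally, e.g.
# `pip install .` (or `pip install -e .` during development), which exposes
# the `lemma` console script defined in entry_points above.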

View File

@ -0,0 +1,40 @@
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  Lemma Lambda Function

Globals:
  Function:
    Timeout: %TIMEOUT%

Resources:
  LemmaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: app/
      Handler: run.sh
      Runtime: python3.12
      MemorySize: %MEMORY%
      Architectures:
        - arm64
      Environment:
        Variables:
          AWS_LAMBDA_EXEC_WRAPPER: /opt/bootstrap
          AWS_LWA_INVOKE_MODE: response_stream
          LEMMA_API_KEY: %API_KEY%
          LEMMA_TIMEOUT: %TIMEOUT%
          HOME: /tmp
          PORT: 8000
      Layers:
        - !Sub arn:aws:lambda:${AWS::Region}:753240598075:layer:LambdaAdapterLayerArm64:22
      FunctionUrlConfig:
        AuthType: NONE
        InvokeMode: RESPONSE_STREAM

Outputs:
  LemmaFunctionUrl:
    Description: "Lemma Lambda Function URL"
    Value: !Sub "LEMMA_URL:${LemmaFunctionUrl.FunctionUrl}"
  LemmaFunction:
    Description: "Lemma Lambda Function ARN"
    Value: !GetAtt LemmaFunction.Arn
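# Note: %TIMEOUT%, %MEMORY% and %API_KEY% are plain-text placeholders, not valid
# CloudFormation values; presumably a setup step substitutes them into the
# generated template before `sam deploy`.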

View File

@ -0,0 +1,40 @@
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  Lemma Lambda Function

Globals:
  Function:
    Timeout: %TIMEOUT%

Resources:
  LemmaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: app/
      Handler: run.sh
      Runtime: python3.12
      MemorySize: %MEMORY%
      Architectures:
        - x86_64
      Environment:
        Variables:
          AWS_LAMBDA_EXEC_WRAPPER: /opt/bootstrap
          AWS_LWA_INVOKE_MODE: response_stream
          LEMMA_API_KEY: %API_KEY%
          LEMMA_TIMEOUT: %TIMEOUT%
          HOME: /tmp
          PORT: 8000
      Layers:
        - !Sub arn:aws:lambda:${AWS::Region}:753240598075:layer:LambdaAdapterLayerX86:22
      FunctionUrlConfig:
        AuthType: NONE
        InvokeMode: RESPONSE_STREAM

Outputs:
  LemmaFunctionUrl:
    Description: "Lemma Lambda Function URL"
    Value: !Sub "LEMMA_URL:${LemmaFunctionUrl.FunctionUrl}"
  LemmaFunction:
    Description: "Lemma Lambda Function ARN"
    Value: !GetAtt LemmaFunction.Arn

19
tools/config/.gau.toml Normal file
View File

@ -0,0 +1,19 @@
threads = 2
verbose = false
retries = 15
subdomains = false
parameters = false
providers = ["wayback","commoncrawl","otx","urlscan"]
blacklist = ["ttf","woff","svg","png","jpg"]
json = false
[urlscan]
apikey = ""
[filters]
from = ""
to = ""
matchstatuscodes = []
matchmimetypes = []
filterstatuscodes = []
filtermimetypes = ["image/png", "image/jpg", "image/svg+xml"]
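# tools/gau copies this file to /tmp/.gau.toml at invocation time; with HOME set
# to /tmp in the Lambda environment, gau picks it up as its default config.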

67
tools/config/dirsearch.ini Executable file
View File

@ -0,0 +1,67 @@
[general]
threads = 25
recursive = False
deep-recursive = False
force-recursive = False
recursion-status = 200-399,401,403
max-recursion-depth = 0
exclude-subdirs = %%ff/,.;/,..;/,;/,./,../,%%2e/,%%2e%%2e/
random-user-agents = False
max-time = 0
exit-on-error = False
# subdirs = /,api/
# include-status = 200-299,401
# exclude-status = 400,500-999
# exclude-sizes = 0b,123gb
# exclude-text = "Not found"
# exclude-regex = "^403$"
# exclude-redirect = "*/error.html"
# exclude-response = 404.html
# skip-on-status = 429,999
[dictionary]
default-extensions = php,aspx,jsp,html,js
force-extensions = False
overwrite-extensions = False
lowercase = False
uppercase = False
capitalization = False
# exclude-extensions = old,log
# prefixes = .,admin
# suffixes = ~,.bak
# wordlists = /path/to/wordlist1.txt,/path/to/wordlist2.txt
[request]
http-method = get
follow-redirects = False
# headers-file = /path/to/headers.txt
# user-agent = MyUserAgent
# cookie = SESSIONID=123
[connection]
timeout = 7.5
delay = 0
max-rate = 0
max-retries = 1
## By disabling `scheme` variable, dirsearch will automatically identify the URI scheme
# scheme = http
# proxy = localhost:8080
# proxy-file = /path/to/proxies.txt
# replay-proxy = localhost:8000
[advanced]
crawl = False
[view]
full-url = False
quiet-mode = False
color = True
show-redirects-history = False
[output]
## Support: plain, simple, json, xml, md, csv, html, sqlite
report-format = plain
autosave-report = False
autosave-report-folder = /tmp
# log-file = /path/to/dirsearch.log
# log-file-size = 50000000
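# autosave-report-folder is set to /tmp since that is the only writable path
# inside the Lambda execution environment.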

10
tools/config/test.txt Executable file
View File

@ -0,0 +1,10 @@
11111111111111111
22222222222222222
33333333333333333
44444444444444444
55555555555555555
66666666666666666
77777777777777777
88888888888888888
99999999999999999
00000000000000000

19
tools/demo Executable file
View File

@ -0,0 +1,19 @@
#!/usr/bin/env python3
import sys
import time

inp = ' '.join(sys.argv[1:])
if len(inp.strip()) == 0:
    inp = "Hello World! I am executing on AWS Lambda. This is a demo of response streaming."

sys.stdout.write("\x1b[42m")
for char in inp:
    sys.stdout.write(char)
    sys.stdout.flush()
    time.sleep(0.02)
sys.stdout.write("\x1b[0m")
sys.stdout.write("\n")
sys.stdout.flush()
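# Illustrative invocation (assuming a deployed stack): `lemma -- demo` streams
# the highlighted text back character by character, demonstrating response
# streaming end to end.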

11
tools/ffuf Executable file
View File

@ -0,0 +1,11 @@
#!/bin/bash
export XDG_DATA_HOME=/tmp/.local/share
export XDG_CONFIG_HOME=/tmp/.config
export XDG_CACHE_HOME=/tmp/.cache
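# (/tmp is the only writable location in the Lambda filesystem, so ffuf's
# config, cache and history are redirected there via the XDG variables above.)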
# get the current directory of where this script is located
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# run ffuf with the new environment variables and the provided arguments
"$DIR/bin/ffuf" "$@"

4
tools/gau Executable file
View File

@ -0,0 +1,4 @@
#!/bin/bash
cp -f ./tools/config/.gau.toml /tmp/.gau.toml
./tools/bin/gau "$@"

108
tools/install_tools.sh Executable file
View File

@ -0,0 +1,108 @@
#!/bin/bash

# Function to display usage message
usage() {
    echo "Usage: $0 <arch>"
    echo "arch must be either x86_64 or arm64"
    exit 1
}

# Check if argument is provided
if [ -z "$1" ]; then
    echo "Error: No architecture specified."
    usage
fi

# Check if the argument is valid
if [ "$1" != "x86_64" ] && [ "$1" != "arm64" ]; then
    echo "Error: Invalid architecture specified."
    usage
fi

# If the argument is valid, proceed with the script
arch="$1"
echo "Architecture specified: $arch"

#cd ..
rm -rf ./app/tools
rm -f ./app/tool_requirements.txt
cp -rf ./tools ./app/
mkdir -p ./app/tools/bin
mkdir -p ./app/tools/wordlists
touch ./app/tool_requirements.txt

if [ "$arch" == "x86_64" ]; then
    echo "Installing ffuf..."
    tmpdir=$(mktemp -d)
    wget https://github.com/ffuf/ffuf/releases/download/v2.1.0/ffuf_2.1.0_linux_amd64.tar.gz -O $tmpdir/ffuf.tar.gz > /dev/null 2>&1
    tar -xvf $tmpdir/ffuf.tar.gz -C $tmpdir > /dev/null 2>&1
    mv $tmpdir/ffuf ./app/tools/bin/

    echo "Installing httpx..."
    tmpdir=$(mktemp -d)
    wget https://github.com/projectdiscovery/httpx/releases/download/v1.6.5/httpx_1.6.5_linux_amd64.zip -O $tmpdir/httpx.zip > /dev/null 2>&1
    unzip $tmpdir/httpx.zip -d $tmpdir > /dev/null 2>&1
    mv $tmpdir/httpx ./app/tools

    echo "Installing gau..."
    tmpdir=$(mktemp -d)
    wget https://github.com/lc/gau/releases/download/v2.2.3/gau_2.2.3_linux_amd64.tar.gz -O $tmpdir/gau.tar.gz > /dev/null 2>&1
    tar -xvf $tmpdir/gau.tar.gz -C $tmpdir > /dev/null 2>&1
    mv $tmpdir/gau ./app/tools/bin/

    echo "Installing subfinder..."
    tmpdir=$(mktemp -d)
    wget https://github.com/projectdiscovery/subfinder/releases/download/v2.6.6/subfinder_2.6.6_linux_amd64.zip -O $tmpdir/subfinder.zip > /dev/null 2>&1
    unzip $tmpdir/subfinder.zip -d $tmpdir > /dev/null 2>&1
    mv $tmpdir/subfinder ./app/tools

    echo "Installing dnsx..."
    tmpdir=$(mktemp -d)
    wget https://github.com/projectdiscovery/dnsx/releases/download/v1.2.1/dnsx_1.2.1_linux_amd64.zip -O $tmpdir/dnsx.zip > /dev/null 2>&1
    unzip $tmpdir/dnsx.zip -d $tmpdir > /dev/null 2>&1
    mv $tmpdir/dnsx ./app/tools
elif [ "$arch" == "arm64" ]; then
    echo "Installing ffuf..."
    tmpdir=$(mktemp -d)
    wget https://github.com/ffuf/ffuf/releases/download/v2.1.0/ffuf_2.1.0_linux_arm64.tar.gz -O $tmpdir/ffuf.tar.gz > /dev/null 2>&1
    tar -xvf $tmpdir/ffuf.tar.gz -C $tmpdir > /dev/null 2>&1
    mv $tmpdir/ffuf ./app/tools/bin/

    echo "Installing httpx..."
    tmpdir=$(mktemp -d)
    wget https://github.com/projectdiscovery/httpx/releases/download/v1.6.5/httpx_1.6.5_linux_arm64.zip -O $tmpdir/httpx.zip > /dev/null 2>&1
    unzip $tmpdir/httpx.zip -d $tmpdir > /dev/null 2>&1
    mv $tmpdir/httpx ./app/tools

    echo "Installing gau..."
    tmpdir=$(mktemp -d)
    wget https://github.com/lc/gau/releases/download/v2.2.3/gau_2.2.3_linux_arm64.tar.gz -O $tmpdir/gau.tar.gz > /dev/null 2>&1
    tar -xvf $tmpdir/gau.tar.gz -C $tmpdir > /dev/null 2>&1
    mv $tmpdir/gau ./app/tools/bin/

    echo "Installing subfinder..."
    tmpdir=$(mktemp -d)
    wget https://github.com/projectdiscovery/subfinder/releases/download/v2.6.6/subfinder_2.6.6_linux_arm64.zip -O $tmpdir/subfinder.zip > /dev/null 2>&1
    unzip $tmpdir/subfinder.zip -d $tmpdir > /dev/null 2>&1
    mv $tmpdir/subfinder ./app/tools

    echo "Installing dnsx..."
    tmpdir=$(mktemp -d)
    wget https://github.com/projectdiscovery/dnsx/releases/download/v1.2.1/dnsx_1.2.1_linux_arm64.zip -O $tmpdir/dnsx.zip > /dev/null 2>&1
    unzip $tmpdir/dnsx.zip -d $tmpdir > /dev/null 2>&1
    mv $tmpdir/dnsx ./app/tools
fi

echo "Installing smuggler..."
git clone https://github.com/defparam/smuggler ./app/tools/bin/smuggler > /dev/null 2>&1

echo "Installing SecLists common.txt wordlist..."
wget https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/common.txt -O ./app/tools/wordlists/common.txt > /dev/null 2>&1

rm -rf ./app/tools/install_tools.sh
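# Everything staged under ./app/tools is packaged with the function code
# (the SAM templates use CodeUri: app/), so these binaries ship inside the
# deployed Lambda.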

2
tools/runner Executable file
View File

@ -0,0 +1,2 @@
#!/bin/bash
$@

4
tools/smuggler Executable file
View File

@ -0,0 +1,4 @@
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
/usr/bin/env python3 "$DIR/bin/smuggler/smuggler.py" "$@"