Add benchmarking helper tool (#957)

- Adding a new tool `TuistBenchmark` (`tuistbench`) alongside `FixtureGenerator`
- The new tool allows measuring how long generation takes
- The tool also allows benchmarking a binary against a reference binary
- See the tools' README.md files for more details

- This will enable having a workflow check out two versions of Tuist and benchmark them
  - e.g. a PR against master or the latest release
- Additionally, the benchmark tool can be used standalone during development, profiling, and testing

Test Plan:
- Test out the benchmark tool using some of the examples specified in the README (e.g. the sketch below)
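
For example, a sketch of a local run (the binary path and fixture choice are illustrative, not part of this commit):

```sh
# Build the local tuist binary and the benchmark tool.
swift build
swift build --package-path tools/tuistbench

# Measure the local build against a single fixture.
swift run --package-path tools/tuistbench tuistbench \
  --binary .build/debug/tuist \
  --fixture fixtures/ios_app_with_tests
```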
Kas committed via GitHub on 2020-02-17 20:41:57 +00:00
commit e1c122d2c7, parent 0a68c89c72
34 changed files with 1039 additions and 93 deletions


@@ -3,14 +3,14 @@ name: Fixture Generator
on:
push:
paths:
- fixtures/fixture_generator/**/*
- tools/fixturegen/**/*
- Sources/ProjectDescription/**/*
- .github/workflows/fixture-generator.yml
- .github/workflows/fixturegen.yml
pull_request:
paths:
- fixtures/fixture_generator/**/*
- tools/fixturegen/**/*
- Sources/ProjectDescription/**/*
- .github/workflows/fixture-generator.yml
- .github/workflows/fixturegen.yml
jobs:
test:
@@ -21,13 +21,13 @@ jobs:
- name: Select Xcode 11.2.1
run: sudo xcode-select -switch /Applications/Xcode_11.2.1.app
- name: Build Package
working-directory: ./fixtures/fixture_generator
working-directory: ./tools/fixturegen
run: swift build
- name: Generate Fixture
working-directory: ./fixtures/fixture_generator
run: swift run FixtureGenerator --projects 1 --targets 1 --sources 1
working-directory: ./tools/fixturegen
run: swift run fixturegen --projects 1 --targets 1 --sources 1
- name: Build Tuist
run: swift build
- name: Generate Fixture Project
run: swift run tuist generate --path ./fixtures/fixture_generator/Fixture
run: swift run tuist generate --path ./tools/fixturegen/Fixture

.github/workflows/tuistbench.yml (new file, 24 lines)

@@ -0,0 +1,24 @@
name: Tuist Benchmark
on:
push:
paths:
- tools/tuistbench/**/*
- .github/workflows/tuistbench.yml
pull_request:
paths:
- tools/tuistbench/**/*
- .github/workflows/tuistbench.yml
jobs:
test:
name: Build
runs-on: macOS-latest
steps:
- uses: actions/checkout@v1
- name: Select Xcode 11.2.1
run: sudo xcode-select -switch /Applications/Xcode_11.2.1.app
- name: Build Package
working-directory: ./tools/tuistbench
run: swift build


@@ -5,6 +5,8 @@ excluded:
- fixtures
- Tests
- .build
- tools/fixturegen/.build
- tools/tuistbench/.build
disabled_rules:
- trailing_whitespace
- trailing_comma


@@ -7,6 +7,7 @@ Please, check out guidelines: https://keepachangelog.com/en/1.0.0/
### Added
- Add FAQ section by @mollyIV
- Add benchmarking helper tool https://github.com/tuist/tuist/pull/957 by @kwridan.
### Fixed
@@ -16,9 +17,6 @@ Please, check out guidelines: https://keepachangelog.com/en/1.0.0/
### Changed
- Update XcodeProj to 7.8.0 https://github.com/tuist/tuist/pull/create?base=tuist%3Amaster&head=tuist%3Atarget-attributes by @pepibumur.
### Changed
- Path sorting speed gains https://github.com/tuist/tuist/pull/980 by @adamkhazi
## 1.2.0


@@ -3,10 +3,6 @@
This folder contains sample projects we use in the integration and acceptance tests.
Please keep this list in alphabetical order.
## fixture_generator
This is a Swift Package that generates a large application / workspace to stress test Tuist. The generated fixture itself is not checked in as it can vary based on the test being conducted.
## invalid_workspace_manifest_name
Contains a single file `Workspac.swift`, incorrectly named workspace manifest file.


@@ -1,25 +0,0 @@
{
"object": {
"pins": [
{
"package": "llbuild",
"repositoryURL": "https://github.com/apple/swift-llbuild.git",
"state": {
"branch": null,
"revision": "f73b84bc1525998e5e267f9d830c1411487ac65e",
"version": "0.2.0"
}
},
{
"package": "SwiftPM",
"repositoryURL": "https://github.com/apple/swift-package-manager",
"state": {
"branch": null,
"revision": "9abcc2260438177cecd7cf5185b144d13e74122b",
"version": "0.5.0"
}
}
]
},
"version": 1
}


@@ -1,24 +0,0 @@
# Fixture Generator
A tool to generate large fixtures for the purposes of stress testing Tuist,
# Usage
```sh
swift run FixtureGenerator --projects 10 --targets 10 --sources 500
```
Options:
- `projects`: Number of projects to generate
- `targets`: Number of targets to generate
- `sources`: Number of sources to generate
# Features
- [x] Control number of projects
- [x] Control number of targets
- [x] Control number of sources
- [ ] Add pre-compiled libraries
- [ ] Add pre-compiled frameworks
- [ ] Add pre-compiled xcframeworks


@@ -0,0 +1,16 @@
{
"object": {
"pins": [
{
"package": "swift-tools-support-core",
"repositoryURL": "https://github.com/apple/swift-tools-support-core",
"state": {
"branch": null,
"revision": "693aba4c4c9dcc4767cc853a0dd38bf90ad8c258",
"version": "0.0.1"
}
}
]
},
"version": 1
}


@@ -5,9 +5,15 @@ import PackageDescription
let package = Package(
name: "FixtureGenerator",
platforms: [
.macOS(.v10_13),
],
products: [
.executable(name: "fixturegen", targets: ["FixtureGenerator"]),
],
dependencies: [
// Dependencies declare other packages that this package depends on.
.package(url: "https://github.com/apple/swift-package-manager", from: "0.5.0"),
.package(url: "https://github.com/apple/swift-tools-support-core", from: "0.0.1"),
],
targets: [
// Targets are the basic building blocks of a package. A target can define a module or a test suite.
@@ -15,8 +21,8 @@ let package = Package(
.target(
name: "FixtureGenerator",
dependencies: [
"SPMUtility",
"SwiftToolsSupport",
]
),
)
]
)


@@ -0,0 +1,25 @@
# Fixture Generator
A tool to generate large fixtures for the purposes of stress testing Tuist.
## Usage
```sh
swift run fixturegen --projects 10 --targets 10 --sources 500
```
Options:
- `--path`: The path to generate the fixture in
- `--projects`, `-p`: Number of projects to generate
- `--targets`, `-t`: Number of targets to generate
- `--sources`, `-s`: Number of sources to generate
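The options can also be passed using their short flags; for example, a sketch that writes the fixture to an illustrative output path:

```sh
# Generate 10 projects, 10 targets each, 500 sources per target (path is illustrative).
swift run fixturegen --path /tmp/Fixture -p 10 -t 10 -s 500
```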
## Features
- [x] Control number of projects
- [x] Control number of targets
- [x] Control number of sources
- [ ] Add pre-compiled libraries
- [ ] Add pre-compiled frameworks
- [ ] Add pre-compiled xcframeworks


@@ -1,7 +1,6 @@
import Basic
import Foundation
import SPMUtility
import TSCBasic
import TSCUtility
enum GenerateCommandError: Error {
case invalidPath
@@ -24,7 +23,7 @@ final class GenerateCommand {
usage: "The path where the fixture will be generated.",
completion: .filename)
projectsArgument = parser.add(option: "--projects",
shortName: "-s",
shortName: "-p",
kind: Int.self,
usage: "Number of projects to generate.")
targetsArgument = parser.add(option: "--targets",


@@ -1,6 +1,6 @@
import Basic
import Foundation
import TSCBasic
class Generator {
private let fileSystem: FileSystem
private let config: GeneratorConfig
@@ -34,7 +34,7 @@ class Generator {
name: String,
projects: [String]) throws {
let manifestPath = path.appending(component: "Workspace.swift")
let manifest = manifestTemplate.generate(workspaceName: name,
projects: projects)
try fileSystem.writeFileContents(manifestPath,


@@ -1,4 +1,3 @@
import Foundation
struct GeneratorConfig {
@@ -9,6 +8,6 @@ struct GeneratorConfig {
extension GeneratorConfig {
static var `default`: GeneratorConfig {
return GeneratorConfig(projects: 5, targets: 10, sources: 50)
GeneratorConfig(projects: 5, targets: 10, sources: 50)
}
}


@@ -1,4 +1,3 @@
import Foundation
class ManifestTemplate {
@@ -24,7 +23,7 @@ class ManifestTemplate {
targets: [
{Targets}
])
"""
private let targetTemplate = """
@@ -45,25 +44,25 @@ class ManifestTemplate {
"""
func generate(workspaceName: String, projects: [String]) -> String {
return workspaceTemplate
workspaceTemplate
.replacingOccurrences(of: "{WorkspaceName}", with: workspaceName)
.replacingOccurrences(of: "{Projects}", with: generate(projects: projects))
}
func generate(projectName: String, targets: [String]) -> String {
return projectTemplate
projectTemplate
.replacingOccurrences(of: "{ProjectName}", with: projectName)
.replacingOccurrences(of: "{Targets}", with: generate(targets: targets))
}
private func generate(projects: [String]) -> String {
return projects.map {
projects.map {
workspaceProjectTemplate.replacingOccurrences(of: "{Project}", with: $0)
}.joined(separator: ",\n")
}
private func generate(targets: [String]) -> String {
return targets.map {
targets.map {
targetTemplate.replacingOccurrences(of: "{TargetName}", with: $0)
}.joined(separator: ",\n")
}


@@ -1,4 +1,3 @@
import Foundation
class SourceTemplate {
@@ -7,18 +6,18 @@ class SourceTemplate {
public class {FrameworkName}SomeClass{Number} {
public init() {
}
public func hello() {
}
}
"""
func generate(frameworkName: String, number: Int) -> String {
return template
template
.replacingOccurrences(of: "{FrameworkName}", with: frameworkName)
.replacingOccurrences(of: "{Number}", with: "\(number)")
}


@@ -1,11 +1,11 @@
import Basic
import Foundation
import SPMUtility
import TSCBasic
import TSCUtility
func main() throws {
let parser = ArgumentParser(commandName: "FixtureGenerator",
usage: "FixtureGenerator <options>",
overview: "Generates large fixtures for the purposes of stress testing Tuist")
let parser = ArgumentParser(commandName: "fixturegen",
usage: "<options>",
overview: "Generates large fixtures for the purposes of stress testing Tuist.")
let fileSystem = localFileSystem
let generateCommand = GenerateCommand(fileSystem: fileSystem,

tools/tuistbench/.gitignore (new file, 7 lines)

@@ -0,0 +1,7 @@
.DS_Store
/.build
/Packages
/*.xcodeproj
xcuserdata/
Fixture
.swiftpm


@@ -0,0 +1,16 @@
{
"object": {
"pins": [
{
"package": "swift-tools-support-core",
"repositoryURL": "https://github.com/apple/swift-tools-support-core",
"state": {
"branch": null,
"revision": "693aba4c4c9dcc4767cc853a0dd38bf90ad8c258",
"version": "0.0.1"
}
}
]
},
"version": 1
}


@@ -0,0 +1,28 @@
// swift-tools-version:5.1
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
name: "TuistBenchmark",
platforms: [
.macOS(.v10_13),
],
products: [
.executable(name: "tuistbench", targets: ["TuistBenchmark"]),
],
dependencies: [
// Dependencies declare other packages that this package depends on.
.package(url: "https://github.com/apple/swift-tools-support-core", from: "0.0.1"),
],
targets: [
// Targets are the basic building blocks of a package. A target can define a module or a test suite.
// Targets can depend on other targets in this package, and on products in packages which this package depends on.
.target(
name: "TuistBenchmark",
dependencies: [
"SwiftToolsSupport",
]
),
]
)

tools/tuistbench/README.md (new file, 161 lines)

@@ -0,0 +1,161 @@
# Tuist Benchmark
A tool to time and benchmark tuist commands against a set of fixtures.
## Usage
**Measurement (single fixture):**
```sh
swift run tuistbench --binary /path/to/local/tuist --fixture /path/to/fixture
```
**Benchmark (single fixture):**
```sh
swift run tuistbench --binary /path/to/local/tuist --reference-binary /path/to/master/tuist --fixture /path/to/fixture
```
**Benchmark (multiple fixtures):**
```sh
swift run tuistbench --binary /path/to/local/tuist --reference-binary /path/to/master/tuist --fixture-list /path/to/fixtures.json
```
`fixtures.json` example:
```json
{
"paths": [
"/path/to/fixtures/fixture_a",
"/path/to/fixtures/fixture_b"
]
}
```
**Options:**
- `--binary`, `-b`: Path to the tuist binary (usually the local one)
- `--reference-binary`, `-r`: Path to the reference tuist binary to benchmark against (usually master or the latest release)
- `--fixture`, `-f`: Path to the fixture to use for benchmarking (the directory that contains the project or workspace manifest)
- `--fixture-list`, `-l`: Path to the fixture list json file (this contains a list of fixture paths)
- `--format`: The output format (`console` or `markdown`)
- `--config`, `-c`: Path to the configuration override json file, which supports the following keys:
  - `arguments`: The arguments to use when invoking the binary (e.g. `["generate"]`)
  - `runs`: The number of times to perform a measurement (final results are the average of those runs)
  - `deltaThreshold`: The time interval threshold that measurements must exceed to be considered different (unit is `TimeInterval` / `Double` seconds)
`deltaThreshold` example, when `deltaThreshold` is `0.02`:
- new measurement `1.20`s vs. old measurement `1.21`s: the results consider those measurements approximately equal (`≈`)
- new measurement `1.20`s vs. old measurement `1.23`s: the results will display a delta of `-0.03`s
`config.json` example:
```json
{
"arguments": ["generate"],
"runs": 5,
"deltaThreshold": 0.02
}
```
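A configuration file can then be passed alongside the other options; a sketch, with illustrative paths:

```sh
swift run tuistbench -b /path/to/local/tuist -f /path/to/fixture -c config.json
```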
## Example Output
**Measurement (single fixture):**
Console:
```sh
$ swift run tuistbench -b $(which tuist) -f /path/to/fixtures/ios_app_with_tests
Fixture : ios_app_with_tests
Runs : 5
Result
- cold : 0.72s
- warm : 0.74s
```
Markdown:
```sh
$ swift run tuistbench -b $(which tuist) -f /path/to/ios_app_with_tests --format markdown
```
| Fixture | Cold | Warm |
| ------------------ | ------| ----- |
| ios_app_with_tests | 0.72s | 0.72s |
**Benchmark (multiple fixtures):**
`fixtures.json`:
```json
{
"paths": [
"/path/to/fixtures/ios_app_with_tests",
"/path/to/fixtures/ios_app_with_carthage_frameworks",
"/path/to/fixtures/ios_app_with_helpers"
]
}
```
Console:
```sh
$ swift run tuistbench -b /path/to/tuist/.build/release/tuist -r $(which tuist) -l fixtures.json
Fixture : ios_app_with_tests
Runs : 5
Result
- cold : 0.79s vs 0.80s (≈)
- warm : 0.75s vs 0.79s (⬇︎ 0.04s 5.63%)
Fixture : ios_app_with_carthage_frameworks
Runs : 5
Result
- cold : 0.78s vs 0.86s (⬇︎ 0.08s 8.90%)
- warm : 0.76s vs 0.80s (⬇︎ 0.04s 5.05%)
Fixture : ios_app_with_helpers
Runs : 5
Result
- cold : 2.24s vs 2.37s (⬇︎ 0.12s 5.18%)
- warm : 2.03s vs 2.11s (⬇︎ 0.07s 3.55%)
```
Markdown:
```sh
$ swift run tuistbench -b /path/to/tuist/.build/release/tuist -r $(which tuist) -l fixtures.json --format markdown
```
| Fixture | New | Old | Delta |
| --------------- | ------ | ---- | -------- |
| ios_app_with_tests _(cold)_ | 0.73s | 0.79s | ⬇︎ 7.92% |
| ios_app_with_tests _(warm)_ | 0.79s | 0.79s | ≈ |
| ios_app_with_carthage_frameworks _(cold)_ | 0.79s | 0.85s | ⬇︎ 7.36% |
| ios_app_with_carthage_frameworks _(warm)_ | 0.77s | 0.81s | ⬇︎ 5.26% |
| ios_app_with_helpers _(cold)_ | 2.29s | 2.43s | ⬇︎ 5.80% |
| ios_app_with_helpers _(warm)_ | 1.97s | 2.15s | ⬇︎ 8.05% |
## Features
- [x] Measure cold and warm runs for `tuist generate`
- [x] Specify individual fixture paths
- [x] Specify multiple fixture paths (via `.json` file)
- [x] Basic console results output
- [x] Markdown results output (for use on GitHub)
- [x] Custom configuration to tweak tuist command, number of runs and delta threshold
- [x] Average cold and warm runs


@@ -0,0 +1,44 @@
import Foundation
import TSCBasic
struct BenchmarkResult {
var fixture: String
var results: MeasureResult
var reference: MeasureResult
var coldRunsDelta: TimeInterval {
results.coldRuns.average() - reference.coldRuns.average()
}
var warmRunsDelta: TimeInterval {
results.warmRuns.average() - reference.warmRuns.average()
}
}
final class Benchmark {
private let fileHandler: FileHandler
private let binaryPath: AbsolutePath
private let referenceBinaryPath: AbsolutePath
init(fileHandler: FileHandler,
binaryPath: AbsolutePath,
referenceBinaryPath: AbsolutePath) {
self.fileHandler = fileHandler
self.binaryPath = binaryPath
self.referenceBinaryPath = referenceBinaryPath
}
func benchmark(runs: Int,
arguments: [String],
fixturePath: AbsolutePath) throws -> BenchmarkResult {
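// Measure the candidate binary and the reference binary against the same fixture with identical arguments.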
let a = Measure(fileHandler: fileHandler, binaryPath: binaryPath)
let b = Measure(fileHandler: fileHandler, binaryPath: referenceBinaryPath)
let results = try a.measure(runs: runs, arguments: arguments, fixturePath: fixturePath)
let reference = try b.measure(runs: runs, arguments: arguments, fixturePath: fixturePath)
return BenchmarkResult(fixture: fixturePath.basename,
results: results,
reference: reference)
}
}


@@ -0,0 +1,20 @@
import Foundation
struct BenchmarkConfig: Decodable {
/// Arguments to use when running the binary (e.g. `generate`)
var arguments: [String]
/// Number of runs to perform (final results are the average of all those runs)
var runs: Int
/// The time interval threshold that must be exceeded to record a delta;
/// any measurements below this threshold are treated as approximately equal
var deltaThreshold: TimeInterval
/// Default benchmarking configuration
static var `default`: BenchmarkConfig {
BenchmarkConfig(arguments: ["generate"],
runs: 5,
deltaThreshold: 0.02)
}
}


@@ -0,0 +1,7 @@
import Foundation
struct Fixtures: Decodable {
/// Paths to fixtures
/// - Note: can be absolute or relative to the current working directory
var paths: [String]
}


@@ -0,0 +1,98 @@
import Foundation
import TSCBasic
struct MeasureResult {
var fixture: String
var coldRuns: [TimeInterval]
var warmRuns: [TimeInterval]
}
enum MeasureError: LocalizedError {
case commandFailed(command: [String])
var errorDescription: String? {
switch self {
case let .commandFailed(command):
return "Command returned non 0 exit code '\(command.joined(separator: " "))'"
}
}
}
final class Measure {
private let fileHandler: FileHandler
private let binaryPath: AbsolutePath
init(fileHandler: FileHandler,
binaryPath: AbsolutePath) {
self.fileHandler = fileHandler
self.binaryPath = binaryPath
}
func measure(runs: Int,
arguments: [String],
fixturePath: AbsolutePath) throws -> MeasureResult {
let cold = try measureColdRuns(runs: runs, arguments: arguments, fixturePath: fixturePath)
let warm = try measureWarmRuns(runs: runs, arguments: arguments, fixturePath: fixturePath)
return MeasureResult(fixture: fixturePath.basename,
coldRuns: cold,
warmRuns: warm)
}
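// Cold runs: every run copies the fixture into a fresh temporary directory, so no previously generated projects exist.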
private func measureColdRuns(runs: Int,
arguments: [String],
fixturePath: AbsolutePath) throws -> [TimeInterval] {
try (0 ..< runs).map { _ in
try withTemporaryDirectory(removeTreeOnDeinit: true) { temporaryDirectoryPath in
let temporaryPath = temporaryDirectoryPath.appending(component: "fixture")
try fileHandler.copy(path: fixturePath, to: temporaryPath)
return try measure {
try run(arguments: arguments,
in: temporaryPath)
}
}
}
}
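// Warm runs: the fixture is copied once and regenerated in place; an initial warm-up run is excluded from the results.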
private func measureWarmRuns(runs: Int,
arguments: [String],
fixturePath: AbsolutePath) throws -> [TimeInterval] {
try withTemporaryDirectory(removeTreeOnDeinit: true) { temporaryDirectoryPath in
let temporaryPath = temporaryDirectoryPath.appending(component: "fixture")
try fileHandler.copy(path: fixturePath, to: temporaryPath)
// first warm up isn't included in the results
try run(arguments: arguments,
in: temporaryPath)
return try measure(runs: runs) {
try run(arguments: arguments,
in: temporaryPath)
}
}
}
private func run(arguments: [String], in path: AbsolutePath) throws {
let process = Process()
process.executableURL = binaryPath.asURL
process.arguments = arguments
let pipe = Pipe()
process.standardOutput = pipe
process.currentDirectoryPath = path.pathString
try process.run()
process.waitUntilExit()
guard process.terminationStatus == 0 else {
throw MeasureError.commandFailed(command: [binaryPath.basename] + arguments)
}
}
private func measure(runs: Int, code: () throws -> Void) throws -> [TimeInterval] {
try (0 ..< runs).map { _ in
try measure(code: code)
}
}
private func measure(code: () throws -> Void) throws -> TimeInterval {
let start = Date()
try code()
return Date().timeIntervalSince(start)
}
}


@@ -0,0 +1,168 @@
import Foundation
import TSCBasic
import TSCUtility
enum BenchmarkCommandError: LocalizedError {
case missing(description: String)
var errorDescription: String? {
switch self {
case let .missing(description: description):
return "Missing \(description)."
}
}
}
enum BenchmarkResultFormat: String, CaseIterable, StringEnumArgument {
case console
case markdown
static var completion: ShellCompletion {
.values(
BenchmarkResultFormat.allCases.map {
(value: $0.rawValue, description: $0.rawValue)
}
)
}
}
final class BenchmarkCommand {
private let configPathOption: OptionArgument<PathArgument>
private let formatOption: OptionArgument<BenchmarkResultFormat>
private let fixtureListPathOption: OptionArgument<PathArgument>
private let fixturePathOption: OptionArgument<PathArgument>
private let binaryPathOption: OptionArgument<PathArgument>
private let referenceBinaryPathOption: OptionArgument<PathArgument>
private let fileHandler: FileHandler
init(fileHandler: FileHandler,
parser: ArgumentParser) {
self.fileHandler = fileHandler
configPathOption = parser.add(option: "--config",
shortName: "-c",
kind: PathArgument.self,
usage: "The path to the benchmarking configuration json file.",
completion: .filename)
formatOption = parser.add(option: "--format",
kind: BenchmarkResultFormat.self,
usage: "The output format of the benchmark results.")
fixtureListPathOption = parser.add(option: "--fixture-list",
shortName: "-l",
kind: PathArgument.self,
usage: "The path to the fixtures list json file.",
completion: .filename)
fixturePathOption = parser.add(option: "--fixture",
shortName: "-f",
kind: PathArgument.self,
usage: "The path to the fixture to user for benchmarking.",
completion: .filename)
binaryPathOption = parser.add(option: "--binary",
shortName: "-b",
kind: PathArgument.self,
usage: "The path to the binary to benchmark.",
completion: .filename)
referenceBinaryPathOption = parser.add(option: "--reference-binary",
shortName: "-r",
kind: PathArgument.self,
usage: "The path to the binary to use as a reference for the benchmark.",
completion: .filename)
}
func run(with arguments: ArgumentParser.Result) throws {
let configPath = arguments.get(configPathOption)?.path
let format = arguments.get(formatOption) ?? .console
let fixturePath = arguments.get(fixturePathOption)?.path
let fixtureListPath = arguments.get(fixtureListPathOption)?.path
let referenceBinaryPath = arguments.get(referenceBinaryPathOption)?.path
guard let binaryPath = arguments.get(binaryPathOption)?.path else {
throw BenchmarkCommandError.missing(description: "binary path")
}
let config: BenchmarkConfig = try configPath.map(parseConfig) ?? .default
let fixtures = try getFixturePaths(fixturesListPath: fixtureListPath,
fixturePath: fixturePath)
let renderer = makeRenderer(for: format,
config: config)
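// A reference binary triggers a comparative benchmark; otherwise only the primary binary is measured.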
if let referenceBinaryPath = referenceBinaryPath {
let results = try benchmark(config: config,
fixtures: fixtures,
binaryPath: binaryPath,
referenceBinaryPath: referenceBinaryPath)
renderer.render(results: results)
} else {
let results = try measure(config: config,
fixtures: fixtures,
binaryPath: binaryPath)
renderer.render(results: results)
}
}
private func measure(config: BenchmarkConfig,
fixtures: [AbsolutePath],
binaryPath: AbsolutePath) throws -> [MeasureResult] {
let measure = Measure(fileHandler: fileHandler,
binaryPath: binaryPath)
let results = try fixtures.map {
try measure.measure(runs: config.runs,
arguments: config.arguments,
fixturePath: $0)
}
return results
}
private func benchmark(config: BenchmarkConfig,
fixtures: [AbsolutePath],
binaryPath: AbsolutePath,
referenceBinaryPath: AbsolutePath) throws -> [BenchmarkResult] {
let benchmark = Benchmark(fileHandler: fileHandler,
binaryPath: binaryPath,
referenceBinaryPath: referenceBinaryPath)
let results = try fixtures.map {
try benchmark.benchmark(runs: config.runs,
arguments: config.arguments,
fixturePath: $0)
}
return results
}
private func getFixturePaths(fixturesListPath: AbsolutePath?,
fixturePath: AbsolutePath?) throws -> [AbsolutePath] {
if let fixturePath = fixturePath {
return [fixturePath]
}
if let fixturesListPath = fixturesListPath {
let fixtures = try parseFixtureList(path: fixturesListPath)
return fixtures.paths.map {
AbsolutePath($0, relativeTo: fileHandler.currentPath)
}
}
return []
}
private func makeRenderer(for option: BenchmarkResultFormat, config: BenchmarkConfig) -> Renderer {
switch option {
case .console:
return ConsoleRenderer(deltaThreshold: config.deltaThreshold)
case .markdown:
return MarkdownRenderer(deltaThreshold: config.deltaThreshold)
}
}
private func parseConfig(path: AbsolutePath) throws -> BenchmarkConfig {
let decoder = JSONDecoder()
let data = try fileHandler.contents(of: path)
return try decoder.decode(BenchmarkConfig.self, from: data)
}
private func parseFixtureList(path: AbsolutePath) throws -> Fixtures {
let decoder = JSONDecoder()
let data = try fileHandler.contents(of: path)
return try decoder.decode(Fixtures.self, from: data)
}
}


@@ -0,0 +1,57 @@
import Foundation
final class ConsoleRenderer: Renderer {
private let deltaThreshold: TimeInterval
init(deltaThreshold: TimeInterval) {
self.deltaThreshold = deltaThreshold
}
func render(results: [MeasureResult]) {
results.forEach(render)
}
func render(results: [BenchmarkResult]) {
results.forEach(render)
}
private func render(result: MeasureResult) {
let cold = format(result.coldRuns.average())
let warm = format(result.warmRuns.average())
print("""
Fixture : \(result.fixture)
Runs : \(result.coldRuns.count)
Result
- cold : \(cold)s
- warm : \(warm)s
""")
}
private func render(result: BenchmarkResult) {
let cold = format(result.results.coldRuns.average())
let warm = format(result.results.warmRuns.average())
let coldReference = format(result.reference.coldRuns.average())
let warmReference = format(result.reference.warmRuns.average())
let coldDelta = delta(first: result.results.coldRuns.average(),
second: result.reference.coldRuns.average(),
threshold: deltaThreshold)
let warmDelta = delta(first: result.results.warmRuns.average(),
second: result.reference.warmRuns.average(),
threshold: deltaThreshold)
print("""
Fixture : \(result.fixture)
Runs : \(result.results.coldRuns.count)
Result
- cold : \(cold)s vs \(coldReference)s (\(coldDelta))
- warm : \(warm)s vs \(warmReference)s (\(warmDelta))
""")
}
}


@@ -0,0 +1,61 @@
import Foundation
final class MarkdownRenderer: Renderer {
private let deltaThreshold: TimeInterval
init(deltaThreshold: TimeInterval) {
self.deltaThreshold = deltaThreshold
}
func render(results: [MeasureResult]) {
let rows = results.flatMap(render)
print("""
| Fixture | Cold | Warm |
| ------------------ | ---- | ---- |
\(rows.joined(separator: "\n"))
""")
}
func render(results: [BenchmarkResult]) {
let rows = results.flatMap(render)
print("""
| Fixture | New | Old | Delta |
| --------------- | ------ | ---- | -------- |
\(rows.joined(separator: "\n"))
""")
}
private func render(result: MeasureResult) -> [String] {
let cold = format(result.coldRuns.average())
let warm = format(result.warmRuns.average())
return [
"| \(result.fixture) | \(cold)s | \(warm)s |",
]
}
private func render(result: BenchmarkResult) -> [String] {
let cold = format(result.results.coldRuns.average())
let warm = format(result.results.warmRuns.average())
let coldReference = format(result.reference.coldRuns.average())
let warmReference = format(result.reference.warmRuns.average())
let coldDelta = delta(first: result.results.coldRuns.average(),
second: result.reference.coldRuns.average(),
threshold: deltaThreshold)
let warmDelta = delta(first: result.results.warmRuns.average(),
second: result.reference.warmRuns.average(),
threshold: deltaThreshold)
return [
"| \(result.fixture) _(cold)_ | \(cold)s | \(coldReference)s | \(coldDelta) |",
"| \(result.fixture) _(warm)_ | \(warm)s | \(warmReference)s | \(warmDelta) |",
]
}
}


@@ -0,0 +1,40 @@
import Foundation
protocol Renderer {
func render(results: [MeasureResult])
func render(results: [BenchmarkResult])
}
// MARK: -
extension Renderer {
private var formatter: NumberFormatter {
let formatter = NumberFormatter()
formatter.roundingMode = .halfUp
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 2
formatter.minimumIntegerDigits = 1
return formatter
}
func format(_ double: Double) -> String {
formatter.string(for: double) ?? ""
}
func delta(first: TimeInterval,
second: TimeInterval,
threshold: TimeInterval) -> String {
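// delta > 0 means the new measurement is slower than the reference.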
let delta = first - second
let prefix = delta > 0 ? "+" : ""
let percentageString = prefix + format((delta / second) * 100)
let deltaString = prefix + format(delta)
if delta > threshold {
return "⬆︎ \(deltaString)s \(percentageString)%"
} else if delta < -threshold {
return "⬇︎ \(deltaString)s \(percentageString)%"
} else {
return ""
}
}
}


@@ -0,0 +1,7 @@
import Foundation
extension Array where Element == Double {
func average() -> Double {
reduce(0, +) / Double(count)
}
}


@@ -0,0 +1,22 @@
import Foundation
import TSCBasic
final class FileHandler {
private let fileManager: FileManager = .default
var currentPath: AbsolutePath {
AbsolutePath(fileManager.currentDirectoryPath)
}
func copy(path: AbsolutePath, to: AbsolutePath) throws {
try fileManager.copyItem(atPath: path.pathString, toPath: to.pathString)
}
func exists(path: AbsolutePath) -> Bool {
fileManager.fileExists(atPath: path.pathString)
}
func contents(of path: AbsolutePath) throws -> Data {
try Data(contentsOf: path.asURL)
}
}


@@ -0,0 +1,24 @@
import Foundation
import TSCBasic
import TSCUtility
func main() throws {
let parser = ArgumentParser(commandName: "tuistbench",
usage: "<options>",
overview: "A utility to benchmark running tuist against a set of fixtures.")
let fileHandler = FileHandler()
let generateCommand = BenchmarkCommand(fileHandler: fileHandler,
parser: parser)
let arguments = ProcessInfo.processInfo.arguments
let results = try parser.parse(Array(arguments.dropFirst()))
try generateCommand.run(with: results)
}
do {
try main()
} catch {
print("Error: \(error.localizedDescription)")
exit(1)
}


@@ -0,0 +1,172 @@
---
name: Performance
order: 11
excerpt: This page describes the project's performance testing strategy.
---
# Performance Testing
To test out generation speeds, and to provide some utilities to aid in profiling Tuist, a few auxiliary standalone tools are available:
- `fixturegen`: A tool to generate large fixtures
- `tuistbench`: A tool to benchmark Tuist
Those tools are located within the [`tools/`](https://github.com/tuist/tuist/tree/master/tools) directory.
## Fixture Generator (`fixturegen`)
`fixturegen` allows generating large fixtures. For example, it can generate a workspace with 10 projects, each project with 10 targets, and each target with 500 source files!
Example:
```sh
swift run fixturegen --projects 100 --targets 10 --sources 500
```
Generating those large fixtures can be helpful in profiling Tuist and identifying any hot spots that may otherwise go unnoticed when generating smaller fixtures during development.
## Tuist Benchmark (`tuistbench`)
`tuistbench` has a few modes of operation:
- Measure the generation time of one or more fixtures
- Benchmark two Tuist binaries' generation time of one or more fixtures
The benchmark mode can provide a general idea of how changes impact generation time, for example when benchmarking a pull request against master or the latest release.
The results are averaged from several **cold** and **warm** runs where:
- **cold**: a generation from a clean slate (no xcodeproj files exist)
- **warm**: a re-generation (xcodeproj files already exist)
Here are some example outputs from `tuistbench`.
**Measurement (single fixture):**
Console format:
```sh
swift run tuistbench \
--binary /path/to/tuist/.build/release/tuist \
--fixture /path/to/fixtures/ios_app_with_tests
Fixture : ios_app_with_tests
Runs : 5
Result
- cold : 0.72s
- warm : 0.74s
```
Markdown format:
```sh
swift run tuistbench \
--binary /path/to/tuist/.build/release/tuist \
--fixture /path/to/fixtures/ios_app_with_tests \
--format markdown
```
| Fixture | Cold | Warm |
| ------------------ | ------| ----- |
| ios_app_with_tests | 0.72s | 0.72s |
**Benchmark (single fixture):**
Console format:
```sh
swift run tuistbench \
--binary /path/to/tuist/.build/release/tuist \
--reference-binary $(which tuist) \
--fixture /path/to/fixtures/ios_app_with_tests
Fixture : ios_app_with_tests
Runs : 5
Result
- cold : 0.79s vs 0.80s (≈)
- warm : 0.75s vs 0.79s (⬇︎ 0.04s 5.63%)
```
Markdown format:
```sh
swift run tuistbench \
--binary /path/to/tuist/.build/release/tuist \
--reference-binary $(which tuist) \
--fixture /path/to/fixtures/ios_app_with_tests \
--format markdown
```
| Fixture | New | Old | Delta |
| --------------- | ------ | ---- | -------- |
| ios_app_with_tests _(cold)_ | 0.73s | 0.79s | ⬇︎ 7.92% |
| ios_app_with_tests _(warm)_ | 0.79s | 0.79s | ≈ |
**Benchmark (multiple fixtures):**
A fixture list `json` file is needed to specify multiple fixtures, here's an example:
```json
{
"paths": [
"/path/to/fixtures/ios_app_with_tests",
"/path/to/fixtures/ios_app_with_carthage_frameworks",
"/path/to/fixtures/ios_app_with_helpers"
]
}
```
Console:
```sh
swift run tuistbench \
--binary /path/to/tuist/.build/release/tuist \
--reference-binary $(which tuist) \
--fixture-list fixtures.json
Fixture : ios_app_with_tests
Runs : 5
Result
- cold : 0.79s vs 0.80s (≈)
- warm : 0.75s vs 0.79s (⬇︎ 0.04s 5.63%)
Fixture : ios_app_with_carthage_frameworks
Runs : 5
Result
- cold : 0.78s vs 0.86s (⬇︎ 0.08s 8.90%)
- warm : 0.76s vs 0.80s (⬇︎ 0.04s 5.05%)
Fixture : ios_app_with_helpers
Runs : 5
Result
- cold : 2.24s vs 2.37s (⬇︎ 0.12s 5.18%)
- warm : 2.03s vs 2.11s (⬇︎ 0.07s 3.55%)
```
Markdown:
```sh
swift run tuistbench \
--binary /path/to/tuist/.build/release/tuist \
--reference-binary $(which tuist) \
--fixture-list fixtures.json \
--format markdown
```
| Fixture | New | Old | Delta |
| --------------- | ------ | ---- | -------- |
| ios_app_with_tests _(cold)_ | 0.73s | 0.79s | ⬇︎ 7.92% |
| ios_app_with_tests _(warm)_ | 0.79s | 0.79s | ≈ |
| ios_app_with_carthage_frameworks _(cold)_ | 0.79s | 0.85s | ⬇︎ 7.36% |
| ios_app_with_carthage_frameworks _(warm)_ | 0.77s | 0.81s | ⬇︎ 5.26% |
| ios_app_with_helpers _(cold)_ | 2.29s | 2.43s | ⬇︎ 5.80% |
| ios_app_with_helpers _(warm)_ | 1.97s | 2.15s | ⬇︎ 8.05% |


@@ -18,7 +18,7 @@ There are plenty of ideas floating around what other features Tuist should have
## Focus on improving performance
A common complaint, especially for bigger projects, is the time it takes to generate the project. The main focus until this point was adding features, but time has come to start thinking about performance. End of last year [Kas](https://github.com/kwridan) proposed an [idea](https://github.com/tuist/tuist/issues/820) to start performance testing. First part of this initiative already got [merged](https://github.com/tuist/tuist/pull/890). The [`FixtureGenerator`](https://github.com/tuist/tuist/tree/master/fixtures/fixture_generator) tool will allow us to easily generate a project with potentially hundreds of targets and iron out performance issues much more systematically. It is just the beginning and the plan is eventually to integrate performance testing as part of GitHub checks so we can be sure that adding new features would not have an impact on project generation times.
A common complaint, especially for bigger projects, is the time it takes to generate the project. The main focus until this point was adding features, but time has come to start thinking about performance. End of last year [Kas](https://github.com/kwridan) proposed an [idea](https://github.com/tuist/tuist/issues/820) to start performance testing. First part of this initiative already got [merged](https://github.com/tuist/tuist/pull/890). The [Fixture Generator](https://github.com/tuist/tuist/tree/master/tools/fixturegen) tool will allow us to easily generate a project with potentially hundreds of targets and iron out performance issues much more systematically. It is just the beginning and the plan is eventually to integrate performance testing as part of GitHub checks so we can be sure that adding new features would not have an impact on project generation times.
## Full Changelog