June 2025
Introduction
Test Case Worksheet Macro is an Excel-based utility designed for QA testers to create structured, dropdown-driven test cases with visual indicators for pass, fail, or warning outcomes. I created the macro from scratch as my first major Excel VBA project to support manual test tracking.
Screenshots


Testing Focus
Testing was entirely manual, focusing on verifying dropdown functionality, the correct application of pass/fail/warning formatting, and ensuring the user workflow felt intuitive. I used the VBA editor to step through the macro logic and fix issues encountered during use.
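The macro itself is VBA and is not shown here. As a rough sketch of the same idea being verified (a status dropdown plus color rules for the three outcomes), here is a minimal Python example using openpyxl; the sheet layout, colors, and ranges are assumptions, not the macro's actual values.

```python
# Illustrative only: the real tool is an Excel VBA macro. This openpyxl sketch
# reproduces the same dropdown-plus-conditional-formatting idea.
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation
from openpyxl.formatting.rule import CellIsRule
from openpyxl.styles import PatternFill

wb = Workbook()
ws = wb.active
ws.append(["Test Case", "Steps", "Status"])

# Dropdown restricting the Status column to the three outcomes.
dv = DataValidation(type="list", formula1='"Pass,Fail,Warning"', allow_blank=True)
ws.add_data_validation(dv)
dv.add("C2:C100")

# Color each status: green for Pass, red for Fail, yellow for Warning.
fills = {
    "Pass": PatternFill(start_color="C6EFCE", end_color="C6EFCE", fill_type="solid"),
    "Fail": PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid"),
    "Warning": PatternFill(start_color="FFEB9C", end_color="FFEB9C", fill_type="solid"),
}
for status, fill in fills.items():
    ws.conditional_formatting.add(
        "C2:C100",
        CellIsRule(operator="equal", formula=[f'"{status}"'], fill=fill),
    )

wb.save("test_case_worksheet.xlsx")
```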
Testing Process
Testing was done manually through a trial-and-error process while actively using the macro for my work. I noted issues and areas for improvement as they arose, adjusting the tool iteratively to better support my testing needs.
Impact of Testing
Through iterative testing and use, the macro gradually evolved and improved, though it still has some flaws reflecting my early experience with macro development.
Lessons Learned
- Learned basic Excel VBA to create tools that support manual testing by adding structure and consistent formatting.
- Discovered the challenges of data storage within macros, reinforcing the need to design solutions that minimize user errors and ease maintenance.
- Recognized how even simple macros can help organize testing efforts and improve clarity, without fully automating the process.
April 2024 - December 2024
Introduction
JobSeeker Assistant is my largest solo project to date, designed to help users track job applications with sharing features for accountability. It includes input validation for data accuracy and PDF export for all or selected entries, while being optimized for both low-end devices and larger screens.
Though development paused near completion, the project reflects a strong focus on usability and adaptable design.
Screenshots - Phone


Screenshots - Tablet




Screenshots - Exported File


Testing Focus
- Manual testing
- Unit testing
- Exploratory testing
- Accuracy and reliability of core features
- Usability testing
- User experience evaluation
- Verification of language support
- Validation of export correctness
- Performance testing on actual devices
- Stability testing on actual devices
- Regression testing to ensure ongoing functionality
Testing Process
- Combined manual exploratory testing with automated unit tests and thorough regression testing (a unit-test sketch follows this list)
- Conducted most tests on actual devices to ensure real-world reliability
- Manually validated core features with realistic inputs
- Explored edge cases and uncommon scenarios to find hidden bugs
- Verified export functionality to ensure files saved correctly and contained accurate content
- Checked language translations both inside the app and in exported PDFs
- Tested state saving to ensure persistent data was properly stored and restored
- Performed performance and stability tests under varying conditions
- Focused usability testing on clear, visually appealing displays and minimal user effort
- Ensured all major issues were resolved before moving on to the next feature
- Repeated regression testing after updates to prevent introducing new issues
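As an illustration of the kind of automated unit test described above, here is a minimal Python sketch; the field names and validation rules are hypothetical stand-ins, since the app's actual code is not shown here.

```python
# Hypothetical sketch of input-validation unit tests; fields and rules are
# illustrative, not the app's real implementation.
import unittest
from datetime import date

def validate_entry(company: str, applied_on: str) -> list:
    """Return a list of validation errors for one job-application entry."""
    errors = []
    if not company.strip():
        errors.append("Company name is required.")
    try:
        if date.fromisoformat(applied_on) > date.today():
            errors.append("Application date cannot be in the future.")
    except ValueError:
        errors.append("Application date must be YYYY-MM-DD.")
    return errors

class ValidateEntryTests(unittest.TestCase):
    def test_valid_entry_passes(self):
        self.assertEqual(validate_entry("Acme Corp", "2024-05-01"), [])

    def test_blank_company_rejected(self):
        self.assertIn("Company name is required.",
                      validate_entry("   ", "2024-05-01"))

    def test_malformed_date_rejected(self):
        self.assertIn("Application date must be YYYY-MM-DD.",
                      validate_entry("Acme Corp", "05/01/2024"))

if __name__ == "__main__":
    unittest.main()
```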
Impact of Testing
Performance Results:
- Ran smoothly even on low-end, inexpensive devices
- Avoided extended black screens or freezes during startup and kept loading times consistently quick across all screens, so users never had to wait
- Responsive and stable despite limited hardware resources
- Efficient memory and power usage to avoid slowdowns
General Results:
- Reliable and resistant to crashes, even under unusual or high-load scenarios
- Display remained clear and consistent across different usage cases
- Multi-screen support functioned smoothly without layout breakage or inconsistencies
- Language handling was consistent—full translations displayed without mixing languages (e.g., fully Korean when selected)
- Layout and formatting were carefully verified for every supported language to ensure no overlap or misalignment, including in exported PDF files (a minimal export check is sketched after this list)
- Resulted in a positive and consistent user experience across devices and languages
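One way such an export check can be scripted is sketched below, as a hedged illustration only; the file name, labels, and use of the third-party pypdf package are assumptions, not the project's actual tooling.

```python
from pypdf import PdfReader  # third-party: pip install pypdf

# Hypothetical labels a Korean-language export should contain.
EXPECTED_LABELS = ["회사", "지원일", "상태"]

def pdf_contains_labels(path: str) -> bool:
    """Extract all page text and confirm every expected label appears."""
    text = "".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return all(label in text for label in EXPECTED_LABELS)

print(pdf_contains_labels("export_ko.pdf"))  # hypothetical exported file
```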
Lessons Learned
- Real device testing matters: Some bugs only appear outside of controlled environments.
- OS updates can introduce new issues: Even when the code is unchanged, platform updates can break features.
- Regression testing is critical: Ongoing testing ensures previously working features remain stable.
- Testing is never truly finished: Conditions change, and features must be revalidated regularly.
- Automate where possible: Automation saves time, reduces fatigue, and improves reliability.
- Test for user mistakes: Software should handle errors gracefully and predictably.
Quality, UX, and Broader Lessons
- Designing for quality pays off: A quality-first mindset reduces long-term issues and builds tester confidence.
- Pride in reliable results: Delivering a thoroughly tested product brings satisfaction and user trust.
- Small flaws cause big frustration: Minor bugs can severely impact user experience and trust.
- Consistency is key: QA testers are responsible for ensuring a uniform experience across screens and inputs.
- Language testing is complex: It includes layout, font rendering, and formatting—not just translated words.
- Accessibility testing matters: Ensuring contrast, legibility, and clarity benefits all users.
July 2023
Introduction
USB Scanner is a lightweight Python script for Linux, created during a drone hardware project to detect newly connected USB devices by comparing the current device list against a baseline scan.
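The script itself is not reproduced here, but the baseline-comparison approach looks roughly like this minimal sketch, which assumes a Linux system with lsusb available:

```python
import subprocess

def scan_usb() -> set:
    """Return the set of connected USB devices from lsusb, ignoring bus/device numbers."""
    out = subprocess.run(["lsusb"], capture_output=True, text=True, check=True).stdout
    # "Bus 001 Device 004: ID 046d:c52b ..." -> keep "ID 046d:c52b ...",
    # since bus/device numbers change each time a device is replugged.
    return {line.split(": ", 1)[1] for line in out.splitlines() if ": " in line}

baseline = scan_usb()
input("Baseline captured. Connect or remove devices, then press Enter to rescan...")
current = scan_usb()
for device in sorted(current - baseline):
    print(f"NEW:     {device}")
for device in sorted(baseline - current):
    print(f"REMOVED: {device}")
```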
Testing Focus
Testing focused entirely on manual validation.
Testing Process
- Manual validation: All testing was done manually, simulating real USB connections and removals.
- Accuracy over performance: The main goal was to confirm the script could reliably detect unfamiliar devices.
- Baseline integrity: Ensuring the initial scan accurately captured known devices was critical for future comparisons.
- Detection reliability: Tests aimed to confirm the script consistently flagged new or unexpected USB activity.
Impact of Testing
- Confirmed the script reliably detected unfamiliar USB connections after changes
- Demonstrated consistent performance across repeated manual tests
- Provided confidence in functionality despite its simplicity
- Remained easy to use with minimal setup or maintenance required
Lessons Learned
- Reinforced the value of continuous testing, even for small tools
- Learned that simplicity in design reduces failure points but doesn’t remove the need for testing
- Recognized how early manual testing can prevent issues from slipping into later stages
- Gained appreciation for treating even support utilities with the same QA mindset as larger applications
May 2023 - June 2023
Introduction
EVE Links is a desktop tool I developed to assist EVE Online volunteers by quickly listing and formatting official and third-party guides for sharing in chat. Although built for volunteer use, it was mainly used by me during live support. The program updates live as data is entered, stores content locally, and includes features like note-taking, clipboard formatting, and web-based browsing to streamline resource sharing in high-pressure environments.
Screenshots



Testing Focus
- Performance (critical): Ensuring the app remains fast and responsive, especially during quick data entry in live chat situations.
- Memory efficiency: Minimizing resource usage to keep the program lightweight and able to run smoothly on typical user machines, while actively avoiding memory leaks.
- Redundancy: Implementing safeguards to prevent data loss or corruption, making sure information is reliably stored and retrieved.
- Crash resistance: Testing extensively to ensure the app can handle unexpected inputs or user actions without crashing, maintaining stability under all conditions.
- Low system impact: Ensuring the tool runs efficiently alongside very demanding programs without degrading overall system performance.
- User experience: Verifying that all necessary features are accessible with minimal clicks, optimizing for speed and ease during fast-paced support tasks.
Testing Process
- Designing the program in modules where possible to isolate functionality for easier testing and debugging
- Manual testing of each module during development to ensure functionality and catch bugs early
- Exploratory testing to uncover unexpected behaviors and edge cases
- Use of custom scripts to automate repetitive validation tasks
- Regression testing to ensure new changes didn’t break previously working features
- Stress testing by simulating careless or rapid user inputs
- Performance testing to verify responsiveness under load
- Monitoring for memory leaks and resource usage during extended runs (see the sketch after this list)
- Testing the app concurrently with resource-heavy programs to assess system impact
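As one example of how leak monitoring can be scripted, here is a small sketch using Python's standard tracemalloc module; it illustrates the general technique rather than the project's actual scripts.

```python
import tracemalloc

def report_growth(work, iterations=10_000, top=5):
    """Run work() repeatedly and print the top memory-growth sites."""
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    for _ in range(iterations):
        work()
    after = tracemalloc.take_snapshot()
    for stat in after.compare_to(before, "lineno")[:top]:
        print(stat)
    tracemalloc.stop()

leaky_cache = []  # deliberate leak so the report has something to show
report_growth(lambda: leaky_cache.append("x" * 1024))
```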
Impact of Testing
- Substantially reduced risk of crashes or unexpected behavior, even during fast-paced use
- Maintained reliability when running alongside other demanding applications
- Ensured stable input handling, validation, and overall resilience during live support
- Refined performance through trial and error to stay responsive under load: search results updated quickly without interrupting typing or causing visual lag (a debounce sketch follows)
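A common technique for keeping search responsive while the user types is debouncing: deferring the search until input pauses briefly. The sketch below shows the general idea in Python; it is illustrative only, not EVE Links' actual implementation.

```python
import threading
import time

def debounce(wait_seconds):
    """Delay calls until input pauses for wait_seconds; only the last call runs."""
    def decorator(fn):
        timer = None
        def debounced(*args, **kwargs):
            nonlocal timer
            if timer is not None:
                timer.cancel()  # a newer keystroke supersedes the pending search
            timer = threading.Timer(wait_seconds, fn, args, kwargs)
            timer.start()
        return debounced
    return decorator

@debounce(0.15)
def run_search(query):
    print(f"searching for {query!r}")

# Rapid keystrokes: only the final query triggers an actual search.
for partial in ("e", "ev", "eve links"):
    run_search(partial)
time.sleep(0.3)  # let the trailing timer fire before the script exits
```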
Lessons Learned
- Importance of memory cleanup: Memory management issues led to early crashes, and in some cases manual cleanup or forced garbage collection was necessary.
- Delayed bugs: Not all bugs appear immediately—some only show up after minutes of usage, highlighting the need for extended testing sessions.
- Multiple solutions: There’s rarely just one fix. If one approach causes instability, alternative solutions may avoid the same failure while achieving the same outcome.
- Value of unit testing: Having thorough unit tests across modules not only saved time but also helped catch regressions early.
July 2018 - June 2021
Introduction
BoatTracker is a lightweight mobile app built to streamline race tracking for boat events. Designed for checkpoint officials, the app replaces pen-and-paper methods by allowing users to log racer numbers as they pass, using mobile network timestamps to generate more accurate and consistent timing records. Built entirely by me, the app emphasizes speed, reliability, and real-time data capture.
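The app's code is not shown here, but the core logging idea can be sketched in Python as follows; the file name, fields, and CSV format are illustrative assumptions, not the app's actual implementation.

```python
# Conceptual sketch: stamp each racer number with the current network-synchronized
# time and append it immediately, so no entry is lost if the app closes.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("checkpoint_log.csv")  # hypothetical file name

def log_racer(racer_number: int, checkpoint: str) -> None:
    """Append one timestamped pass record; immediate writes give redundancy."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["racer", "checkpoint", "utc_time"])
        writer.writerow([racer_number, checkpoint,
                         datetime.now(timezone.utc).isoformat()])

log_racer(42, "CP-3")  # racer 42 just passed checkpoint 3
```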
Testing Focus
- Ensured responsiveness and quick operation under race conditions
- Maintained readability and usability in low-light, outdoor environments
- Verified stability and power efficiency to avoid crashes or lag
- Performed exploratory and manual testing to uncover issues early
- Emphasized input validation and data redundancy for accuracy
- Tested reliability during real-world race scenarios
Testing Process
- Tested edge cases, such as no data returned from modules
- Used in-depth understanding of each module to guide test scenarios
- Performed regression testing to avoid breaking existing functionality
- Created lightweight custom scripts to automate parts of the process
- Simulated real-world misuse from an end-user perspective
- Pushed the app beyond expected limits to expose early weaknesses
Impact of Testing
- Remained stable during careless usage, rapid input, and unpredictable user behavior
- Withstood all attempts to crash or trigger invalid states across various environments
- Handled all input reliably without data loss, corruption, or unexpected behavior
- Proved dependable in real-world conditions such as rain, glare, and time-sensitive use
Lessons Learned
- Reinforced the importance of testing early and continuously throughout development
- Learned how minor usability issues can escalate under real-world pressure
- Recognized the critical need for stability in environments where failure isn't an option
- Improved ability to think like both a user and a tester when validating functionality
- Understood the value of testing against careless or unexpected usage
- Sharpened debugging skills and ability to mentally trace program flow
- Learned the importance of testing in real-world conditions, not just controlled environments