💻 Protect sensitive data going to personal apps

Instructions - Prevent sensitive data from being uploaded to ChatGPT

  1. Log in to chat.openai.com. You may notice a warning bar at the top of the page presenting your enterprise AI usage policy. This feature is designed to steer employees toward sanctioned gen AI apps, and it can be customized under Settings > Gen AI Applications.

  2. Try posting some sensitive information, such as an API key or a credit card number. Some samples are below.

```python
import stripe

# Stripe's published test secret key - a value the detector flags as sensitive
stripe.api_key = "sk_test_4eC39HqLyjWDarjtT1zdp7dc"

starter_subscription = stripe.Product.create(
  name="Starter Subscription",
  description="$12/Month subscription",
)
```
Is this a valid credit card number? 5100000010001004
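The 16-digit sample above is a Luhn-valid number in the Mastercard range, which is why a card-number detector flags it. If you want to verify that yourself, a standalone sketch of the standard Luhn checksum (not part of the product) looks like this:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Walk from the rightmost digit; double every second digit,
    # subtracting 9 whenever the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("5100000010001004"))  # True
```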
  3. An alert will pop up at the bottom left of the screen, allowing you to redact the contents before posting.

  4. Click the Redact button to see the sensitive information being masked.

  5. Even if the information was masked prior to posting, a corresponding "averted incident" is logged in the system. It can be accessed under Incidents > Incidents averted.

     - Customers have the option to store this information in their own S3 buckets if desired.

     - If the redaction was performed by the user, it is logged as an incident to review with "high" severity.
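Secrets like the Stripe key in the sample are typically caught by pattern matching. As a rough illustration only (the pattern names and masking behavior here are assumptions, not the product's actual detection logic):

```python
import re

# Illustrative patterns only; real DLP detectors combine many more
# signals (checksums, context, entropy) than a bare regex.
PATTERNS = {
    "stripe_secret_key": re.compile(r"\bsk_(?:test|live)_[A-Za-z0-9]{24,}\b"),
    "card_number": re.compile(r"\b\d{16}\b"),
}

def redact(text: str) -> str:
    """Replace each detected sensitive value with a masked placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact('stripe.api_key = "sk_test_4eC39HqLyjWDarjtT1zdp7dc"'))
# stripe.api_key = "[REDACTED:stripe_secret_key]"
```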

Other supported applications

| S No. | App name | Sensitive information sharing detection - Attempted | Sensitive information sharing detection - Leaked | Redact sensitive information support |
|---|---|---|---|---|
| 1 | ChatGPT | Yes | Yes | Yes |
| 2 | Gmail (personal) | NA | Yes | NA |
| 3 | Discord | Yes | Yes | No |
| 4 | WhatsApp Web | Yes | Roadmap (websockets) | No |
| 5 | Facebook (post) | Yes | Yes | No |
| 6 | Facebook Messenger | Yes | Roadmap (websockets) | No |
| 7 | LinkedIn (post) | Yes | Yes | No |
| 8 | LinkedIn (messaging) | Yes | Yes | No |
| 9 | Slack Web | Yes | Yes | Yes |
| 10 | Evernote | Yes | Roadmap (websockets) | Yes |
| 11 | Pastebin | NA | Yes | NA |
| 12 | Stack Overflow | NA | Yes | NA |

Interested in an app that's not supported?
