harmony_automation

As I own the day-to-day data update process for the Sears plan, there can be more than 100 tickets to process every few weeks.

This module uses Microsoft’s Playwright library to automate scraping, closing, reassigning, and adding tickets in bulk in our internal ticket system, “Harmony”.
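Taken together, a typical session chains these functions roughly as in the sketch below. The import line, the async wrapper (inferred from the playwright.async_api page type used throughout), the "ticket" column name, and the tracker path are all assumptions; the function names and signatures are the ones documented on this page.

```python
# Hedged sketch of a session: open Harmony, log in, export pending logs,
# then close a batch of tickets. Assumes the functions are coroutines and
# importable from a `harmony_automation` module (both assumptions).
import asyncio
import pandas as pd
from harmony_automation import start_page, login, export_logs, close_log

async def main():
    page = await start_page()                      # launch browser, land on Harmony
    assert await login(page) == 0                  # 0 == success
    await export_logs(page, _out_fp="./logs.parquet")
    logs = pd.read_parquet("./logs.parquet")
    for tkt in logs["ticket"]:                     # placeholder column name
        await close_log(page, str(tkt), trk_fp="./closed_tkts.yml", note="Done")

asyncio.run(main())
```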


helpers


moveMouseToCenter

 moveMouseToCenter (page:playwright.async_api._generated.Page)

Moves the mouse to the center of the page to close the sidebar.

|         | Type | Details |
|---------|------|---------|
| page    | Page | Page    |
| Returns | None |         |

appendReassignedTkt2Yml

 appendReassignedTkt2Yml (t_tkt:str|int, trk_fp:str)

Appends the reassigned ticket to a tracking file, in case the run spans several sessions.

|         | Type        | Details     |
|---------|-------------|-------------|
| t_tkt   | str \| int  | str \| int  |
| trk_fp  | str         | str         |
| Returns | None        |             |

appendClosedTkt2Yml

 appendClosedTkt2Yml (t_tkt:str|int, trk_fp:str)

Appends the closed ticket to a tracking file, in case the run spans several sessions.

|         | Type        | Details     |
|---------|-------------|-------------|
| t_tkt   | str \| int  | str \| int  |
| trk_fp  | str         | str         |
| Returns | None        |             |
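Both helpers simply persist processed ticket numbers between sessions. A minimal sketch of what that might look like under the hood, assuming the tracking file is a flat YAML list (the real implementation may structure the file differently):

```python
# Hypothetical re-implementation of the tracker helpers: append a ticket number
# to a YAML list so progress survives a restart.
import os
import yaml

def append_tkt_to_yml(t_tkt: str | int, trk_fp: str) -> None:
    tickets = []
    if os.path.exists(trk_fp):
        with open(trk_fp) as f:
            tickets = yaml.safe_load(f) or []
    tickets.append(str(t_tkt))
    with open(trk_fp, "w") as f:
        yaml.safe_dump(tickets, f)
```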

Playwright


start_page

 start_page () -> Page


login

 login (page:playwright.async_api._generated.Page) -> int

|         | Type | Details      |
|---------|------|--------------|
| page    | Page |              |
| Returns | int  | 0 == success |

search_ticket

 search_ticket (page:playwright.async_api._generated.Page, tkt:str)

Search by ticket #

|         | Type | Details      |
|---------|------|--------------|
| page    | Page | Page         |
| tkt     | str  | str          |
| Returns | int  | 0 == success |

export_logs

 export_logs (page:playwright.async_api._generated.Page,
              _out_fp:str='./logs.parquet')

Assumes we’re on the home page. Gets the number of pending tickets, goes to the My Logs page, and exports the latest 80 tickets to ./logs.parquet.

|         | Type | Default        | Details |
|---------|------|----------------|---------|
| page    | Page |                | Page    |
| _out_fp | str  | ./logs.parquet | str     |
| Returns | None |                |         |
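Once export_logs has run, the exported file can be loaded straight back with pandas for a quick sanity check (the column names are not documented here, so this only previews the frame):

```python
# Quick inspection of the exported logs; export_logs writes ./logs.parquet by default.
import pandas as pd

logs = pd.read_parquet("./logs.parquet")
print(f"{len(logs)} tickets exported")
print(logs.head())
```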

scrape_tickets

 scrape_tickets (page:playwright.async_api._generated.Page,
                 scrape_df:pandas.core.frame.DataFrame)

Go to each ticket that needs scraping and scrape all notes/comments, formatted in chronological order.

|           | Type      | Details      |
|-----------|-----------|--------------|
| page      | Page      | Page         |
| scrape_df | DataFrame | pd.DataFrame |
| Returns   | DataFrame | pd.DataFrame |

scrape_and_concat

 scrape_and_concat (page:playwright.async_api._generated.Page,
                    existing_dat:pandas.core.frame.DataFrame,
                    exported_logs:pandas.core.frame.DataFrame,
                    _out_fp:str='./scrape_concat.parquet')

Identifies logs that need scraping from the exported logs (run export_logs first), then scrapes them and concatenates the results with the existing data/notes. Exports to ./scrape_concat.parquet.

|               | Type      | Default                 | Details      |
|---------------|-----------|-------------------------|--------------|
| page          | Page      |                         | Page         |
| existing_dat  | DataFrame |                         | pd.DataFrame |
| exported_logs | DataFrame |                         | pd.DataFrame |
| _out_fp       | str       | ./scrape_concat.parquet | str          |
| Returns       | None      |                         |              |
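A hedged sketch of that pipeline, assuming (as elsewhere on this page) that the functions are coroutines, that they are already in scope, and that a previous run left notes in ./scrape_concat.parquet:

```python
# Export the latest logs, then scrape any new tickets and merge them with the
# notes scraped in earlier runs. Paths match the documented defaults; the
# existence of a prior ./scrape_concat.parquet is an assumption.
import pandas as pd

async def refresh_notes(page):
    await export_logs(page, _out_fp="./logs.parquet")
    exported = pd.read_parquet("./logs.parquet")
    existing = pd.read_parquet("./scrape_concat.parquet")
    await scrape_and_concat(page, existing_dat=existing,
                            exported_logs=exported,
                            _out_fp="./scrape_concat.parquet")
```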

automation


close_log

 close_log (page:playwright.async_api._generated.Page, tkt:str,
            trk_fp:str, note:str='Done')

Close individual log.

|         | Type | Default | Details |
|---------|------|---------|---------|
| page    | Page |         | Page    |
| tkt     | str  |         | str     |
| trk_fp  | str  |         | str     |
| note    | str  | Done    | str     |
| Returns | int  |         | 0 == success // 1 == already closed // -1 == error filling the resolution |
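Since close_log reports its outcome through these return codes, a bulk-close loop can collect the tickets that need manual follow-up. A minimal sketch, assuming close_log is in scope as above and using placeholder ticket numbers and tracker path:

```python
# Close a batch of tickets, treating 1 (already closed) as a no-op and
# collecting -1 (resolution could not be filled) for manual review.
async def close_all(page, tickets, trk_fp="./closed_tkts.yml"):
    needs_review = []
    for tkt in tickets:
        rc = await close_log(page, str(tkt), trk_fp=trk_fp, note="Done")
        if rc == -1:
            needs_review.append(tkt)
    return needs_review
```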

reassign_log

 reassign_log (page:playwright.async_api._generated.Page, tkt:str,
               trk_fp:str, note:str)

Reassign an individual log. Edit the config to set the colleague the ticket is reassigned to.

|         | Type | Details |
|---------|------|---------|
| page    | Page | Page    |
| tkt     | str  | str     |
| trk_fp  | str  | str     |
| note    | str  | str     |
| Returns | None |         |
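Because appendReassignedTkt2Yml keeps a tracking file across sessions, a resumed run can skip tickets that were already handled. A sketch under the same assumption as earlier that the tracker is a flat YAML list (the actual file layout may differ):

```python
# Skip tickets already recorded in the tracking file, then reassign the rest.
import os
import yaml

async def reassign_remaining(page, tickets, note, trk_fp="./reassigned_tkts.yml"):
    done = set()
    if os.path.exists(trk_fp):
        with open(trk_fp) as f:
            done = set(map(str, yaml.safe_load(f) or []))
    for tkt in tickets:
        if str(tkt) not in done:
            await reassign_log(page, str(tkt), trk_fp=trk_fp, note=note)
```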

add_logs

 add_logs (df:pandas.core.frame.DataFrame,
           page:playwright.async_api._generated.Page)

Add logs in bulk.

|         | Type      | Details |
|---------|-----------|---------|
| df      | DataFrame | pd.DataFrame w/ 2 columns: id (Employee #) and note |
| page    | Page      | Page    |
| Returns | None      |         |
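The expected input is simply a two-column frame; the employee numbers and notes below are placeholders:

```python
# Build the two-column DataFrame add_logs expects (id = Employee #, note = log text).
import pandas as pd

new_logs = pd.DataFrame({
    "id":   ["123456", "789012"],                              # placeholder Employee #s
    "note": ["Data update complete", "Awaiting updated census file"],
})
# await add_logs(new_logs, page)   # assumes a coroutine, as elsewhere on this page
```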