Prosecution Insights
Last updated: April 19, 2026
Application No. 18/610,544

SYSTEMS AND METHODS TO GENERATE CORRESPONDENCES BETWEEN PORTIONS OF RECORDED AUDIO CONTENT AND RECORDS OF A COLLABORATION ENVIRONMENT

Final Rejection — §103, §DP

Filed: Mar 20, 2024
Examiner: LI, LIANG Y
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Asana, Inc.
OA Round: 2 (Final)

Grant Probability: 61%
PTA Risk: Moderate
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% of resolved cases (167 granted / 273 resolved; +6.2% vs TC avg)
Interview Lift: +69.1% (strong; resolved cases with an interview vs. without)
Avg Prosecution: 3y 5m (typical timeline)
Career History: 299 total applications across all art units; 26 currently pending

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 273 resolved cases.
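The figures above reduce to simple arithmetic over the examiner's resolved cases. The sketch below is illustrative only: the helper names are invented here, and the ~40% Tech Center baseline is inferred from the displayed deltas (each statute rate minus its delta gives 40.0), not taken from the tool's documented methodology.

```python
# Recompute the dashboard's headline stats from the raw counts shown.
# The 40% TC baseline is an assumption inferred from the displayed deltas.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_avg(rate: float, tc_avg: float) -> float:
    """Signed difference between a statute-specific rate and the TC average."""
    return rate - tc_avg

career = allow_rate(167, 273)                 # 167 granted of 273 resolved
print(f"Career allow rate: {career:.0f}%")    # -> Career allow rate: 61%

# Statute-specific rates from the panel; each displayed delta is the
# rate minus the TC average estimate (all four deltas imply ~40%).
statute_rates = {"101": 16.9, "103": 48.6, "102": 21.2, "112": 9.4}
for statute, rate in statute_rates.items():
    print(f"§{statute}: {delta_vs_avg(rate, 40.0):+.1f}% vs TC avg")
```

Running this reproduces the four "vs TC avg" deltas shown in the panel to one decimal place.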

Office Action

§103 §DP
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to pending claims 1-20 filed 12/19/2025.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim(s) 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim(s) 1-7 of U.S. Patent No. 11997425 B1. Although the claims at issue are not identical, they are not patentably distinct from each other for the reasons listed below.

Current Application -> Reference Patent (11997425 B1)
Claim 1 -> Claim 1
Claim 2 -> Claim 2
Claim 3 -> Claim 3
Claim 4 -> Claim 4
Claim 5 -> Claim 5
Claim 6 -> Claim 6
Claim 7 -> Claim 7
Claim 8 -> Claim 1
Claim 9 -> Claim 1
Claim 10 -> Claim 1
Claims 11-20 -> Claims 1-7 (recite similar limitations to the above and are likewise rejected)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 5-12, 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hilleli (US 11062270 B2) in view of Zhu (US 7213051 B2) in view of Kishore (US 20210209561 A1).

For claim 1, Hilleli discloses: a system configured to generate correspondences between portions of recorded audio content and records of a collaboration environment (figs. 5-7 give a GUI overview), the system comprising: one or more physical processors configured by machine-readable instructions to (fig. 11 gives a hardware overview):

- effectuate presentation of an instance of a user interface on a client computing platform associated with a user through which recorded audio content is accessible (figs. 5-7), the user interface including a temporal selection portion (fig. 5B shows selection of period of time 504 corresponding to action items);
- obtain user input information conveying user input into the instance of the user interface, the user input including identification of first temporal content and second temporal content within the recorded audio content (figs. 5-6 show user input options for selecting and identifying action items for further interaction), the user input being facilitated by user interaction with the temporal selection portion of the user interface (ibid: receiving various interactions to convey feedback);
- generate, based on the user input information, first correspondence information and second correspondence information, the first correspondence information conveying a first correspondence between the first temporal content of the recorded audio content and a first work unit record managed by a collaboration environment (figs. 5-6 show various techniques of updating action items, such as via fig. 5C:512 (c.30 ¶3), indicating establishing a correspondence between a fragment of recorded audio content and an action item, the action item being a work unit record stored and managed by the environment for display), and the second correspondence information conveying a second correspondence between the second temporal content of the recorded audio content and a second work unit record managed by the collaboration environment (ibid: likewise for additional items);
- store the updated indication within the first correspondence information (fig. 5C:512 (c.30 ¶3): identifying recorded snippets as action items; fig. 6:602 (c.31 ¶3-4): manually adding action items by the user via the interface, the updated indications being stored, such as in fig. 2:225 (c.10 ¶2), see also col. 7 ¶3); and
- effectuate presentation of a work unit page on the client computing platform through which the first work unit record is accessible (figs. 5A-C constitute an action item work unit page listing various action items, the action items being accessible via the network of fig. 1), the work unit page being configured to provide access to the instance of the user interface through which the recorded audio content is accessible (ibid: GUI pages show providing access to recorded audio content corresponding to the action items).

Hilleli does not disclose:

- the temporal selection portion configured to direct a playback of the recorded audio content to one or more periods of time to identify temporal content within the recorded audio content;
- monitor the user interaction with the temporal selection portion to identify an updated indication of position and duration of a period of time within the recorded audio content that identifies the first temporal content;
- wherein the first and second work unit records were previously assigned to the user.
Zhu discloses:

- a temporal selection portion configured to direct a playback of the recorded audio content to one or more periods of time to identify temporal content within the recorded audio content (fig. 21 shows a timeline view with conventional playback controls in the toolbar (play, pause, stop, etc.); see also c.62 ¶3, contemplating segment selection including editing segments, and c.63 ¶3, moving segments; hence, a timeline UI for directing playback of recorded audio content to periods of time to identify temporal content, such as for editing segments);
- monitor the user interaction with the temporal selection portion to identify an updated indication of position and duration of a period of time within the recorded audio content that identifies the first temporal content (ibid: editing segments via moving and adjusting range (c.63 ¶3-9); see also fig. 25 (c.86 ¶3-6), contemplating adjusting segment start and stopping points).

It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the system of Hilleli by incorporating the segment timeline GUI of Zhu. Both concern the art of modifying recorded audio content associations, and the incorporation would have, according to Zhu, allowed easier storing, recording, and editing of recorded meetings (col. 1, "Summary of Invention").

Hilleli modified by Zhu does not disclose: wherein the first and second work unit records were previously assigned to the user.

Kishore discloses: wherein the first and second work unit records were previously assigned to the user (¶¶0045-47 contemplate tracking and updating of previously assigned action items for presentation (¶0047)).

It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the system of Hilleli and Zhu by incorporating the action item tracking technique of Kishore. Both concern the art of communal tracking of action items, and the incorporation would have, according to Kishore, allowed briefing by participants at a later time, hence allowing for ongoing engagement between meetings, hence enhancing tracking and reducing forgetting (¶¶0047, 0003-4).

For claim 2, Hilleli modified by Zhu modified by Kishore discloses the system of claim 1, as described above. Hilleli further discloses: wherein the user input further includes identification of the first work unit record and the second work unit record (fig. 5C:512-514: identification of action items as associated or disassociated with the snippet; fig. 6: identification or disassociation of action items).

For claim 5, Hilleli modified by Zhu modified by Kishore discloses the system of claim 1, as described above. Hilleli further discloses: store the first correspondence information in the first work unit record, and the second correspondence information in the second work unit record (fig. 2:225 (c.10 ¶2) and col. 7 ¶3 disclose storing of action item work unit records, with figs. 5-6 contemplating action items with correspondence information (text, time, notes, etc.) for browsing and display).

For claim 6, Hilleli modified by Zhu modified by Kishore discloses the system of claim 1, as described above. Hilleli further discloses: wherein the work unit page includes a resource identifier that, when selected, causes a digital asset stored in the first work unit record to be accessed, the digital asset corresponding to the recorded audio content (fig. 5C shows various selections of resources, including expandable context information, hence accessing context records associated with the work unit record to be displayed via a resource identifier selector).

For claim 7, Hilleli modified by Zhu modified by Kishore discloses the system of claim 1, as described above.
Hilleli further discloses: train a machine learning model based on the user input information to generate a trained machine learning model, the trained machine learning model being configured to automatically generate correspondences between the temporal content of the recorded audio content and work unit records (fig. 8B (col. 34 ¶1) shows training, based on user feedback, a machine learning model for generating action items from temporal audio content).

For claim 8, Hilleli modified by Zhu modified by Kishore discloses the system of claim 1, as described above. Hilleli further discloses: wherein generation of the first correspondence information causes access to the first temporal content to be provided while accessing the first work unit record via the work unit page (figs. 5A-C show provision of action items and associated generated correspondence information).

For claim 9, Hilleli modified by Zhu modified by Kishore discloses the system of claim 1, as described above. Hilleli and Kishore further disclose: manage environment state information maintaining the collaboration environment, the collaboration environment being configured to facilitate interaction by users with the collaboration environment (Hilleli figs. 5A-C show various users of a collaboration environment alongside various state variables and parameters, e.g., meeting length, speakers, timestamps, etc.; Kishore ¶¶0045-47), the environment state information including work unit records, the work unit records including work information characterizing units of work assigned to the users who are expected to accomplish one or more actions to complete the units of work (Hilleli figs. 5A-C show various action items that are assigned or associated with users; Kishore ¶¶0045-47), the work unit records including the first work unit record and the second work unit record (ibid).

For claim 10, Hilleli modified by Zhu modified by Kishore discloses the system of claim 9, as described above.
Kishore further discloses: wherein the first work unit record was previously assigned to the user by another user (Kishore ¶¶0045-47 contemplate a reviewer, CXO (¶¶0003-4)).

Claims 11-12, 15-20 recite analogous methods and are hence rejected under the same rationale.

Claim(s) 3-4, 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hilleli (US 11062270 B2) in view of Zhu (US 7213051 B2) in view of Kishore (US 20210209561 A1) in view of Ieperen (US 20030065722 A1).

For claim 3, Hilleli modified by Zhu discloses the system of claim 1, as described above. Hilleli modified by Zhu does not disclose the limitations of claim 3: wherein the presentation of the instance of the user interface is limited to client computing platforms associated with users linked to the recorded audio content.

Ieperen discloses: wherein the presentation of the instances of the user interface through which the users access the meeting content is limited to the users that are linked to the meeting content (¶¶0015-17, fig. 5a, ¶¶0049, 0046: linking a shared workspace to the meeting and providing access to shared workspace meeting content to participants in the meeting, hence linked to the shared workspace; the combination with Hilleli yields application to audio content of meetings).

It would have been obvious before the effective filing date to one of ordinary skill in the art to modify the system of Hilleli by incorporating the collaborative workspace authorization technique of Ieperen. Both concern the art of collaborative workspaces associated with meetings or conferences, and the incorporation would have, according to Ieperen, simplified authentication and authorization management in collaborative environments (¶0004) and provided easy access to collaborative workspaces (¶0049).

For claim 4, Hilleli modified by Zhu modified by Ieperen discloses the system of claim 3, as described above. Ieperen further discloses: wherein the users that are linked to the recorded audio content include one or more of a creator of the recorded audio content (¶¶0049, 0045-47: creators and participants), an assignee identified in the first work unit record or the second work unit record, or participants in the recorded audio content (¶¶0049, 0045-47: participants in conference).

Claims 13-14 recite analogous methods and are hence rejected under the same rationale.

Response to Arguments

Applicant's arguments have been fully considered. In the remarks, Applicant argues:

1. Hilleli fig. 5C discloses whether character sequences are action items in general, but is unconnected to work items that have already been generated, hence not action items "previously assigned to the user."

Regarding Hilleli's non-disclosure, Examiner agrees. The claims recite a first and a second "work unit record … previously assigned to the user". As the presentations in figs. 5A-C are, at most, work units previously assigned, such as via a discussion, by the collaborators, Hilleli does not disclose a user interacting with previously assigned work unit records, which under BRI are understood as data structures storing previously generated work unit data managed by the collaboration environment and associated with and assigned to the user, as claimed. However, Applicant's arguments are moot in view of newly mapped art Kishore, as described in the rejection above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Fitzsimmons (US 20150181020 A1) fig. 6B discloses annotations for meeting minutes which can be moved; see ¶196. Zhan (US 20220206673 A1) figs. 14-16 disclose editing annotations and tasks associated with meeting timeline annotations.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIANG LI whose telephone number is (303) 297-4263. The examiner can normally be reached Mon-Fri 9-12p, 3-11p MT (11-2p, 5-1a ET). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The examiner is available for interviews Mon-Fri 6-11a, 2-7p MT (8-1p, 4-9p ET).

/LIANG LI/
Primary Examiner, AU 2143

Prosecution Timeline

Mar 20, 2024
Application Filed
Sep 28, 2025
Non-Final Rejection — §103, §DP
Dec 19, 2025
Response Filed
Feb 14, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596463 — METHOD AND APPARATUS FOR IMAGE-BASED NAVIGATION — Granted Apr 07, 2026 (2y 5m to grant)
Patent 12585716 — INTELLIGENT RECOMMENDATION METHOD AND APPARATUS, MODEL TRAINING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585375 — GENERATING SNAPPING GUIDE LINES FROM OBJECTS IN A DESIGNATED REGION — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12580000 — MULTITRACK EFFECT VISUALIZATION AND INTERACTION FOR TEXT-BASED VIDEO EDITING — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561566 — NEURAL NETWORK LAYER FOLDING — Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 99% (+69.1%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate
Based on 273 resolved cases by this examiner. Grant probability derived from career allow rate.
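The displayed projections are mutually consistent under one simple model: take the career allow rate as the base grant probability, apply the relative interview lift, and cap the result at 99%. This is a hypothetical reconstruction; the formula, the cap, and the `with_interview` helper below are assumptions, not the tool's documented methodology.

```python
# Hypothetical reconstruction of the projection card. The relative-lift
# formula and the 99% cap are assumptions, not the tool's stated model.

def with_interview(base_pct: float, lift_pct: float, cap: float = 99.0) -> float:
    """Apply a relative interview lift to a base grant probability (percent)."""
    return min(base_pct * (1 + lift_pct / 100.0), cap)

base = 100.0 * 167 / 273              # career allow rate, ~61%
boosted = with_interview(base, 69.1)  # 61.2% * 1.691 ≈ 103.4%, capped at 99%
print(f"{base:.0f}% -> {boosted:.0f}% with interview")  # -> 61% -> 99% with interview
```

Under these assumptions the uncapped product exceeds 100%, which would explain why the "With Interview" figure saturates at 99% rather than scaling linearly.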
