Prosecution Insights
Last updated: April 19, 2026
Application No. 18/965,797

PERSONALIZED ADAPTIVE MEETING PLAYBACK

Non-Final OA §DP
Filed
Dec 02, 2024
Examiner
WENDMAGEGN, GIRUMSEW
Art Unit
2484
Tech Center
2400 — Computer Networks
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 77% — above average (742 granted / 968 resolved; +18.7% vs TC avg)
Interview Lift: +21.4% (strong; among resolved cases with interview)
Avg Prosecution: 2y 11m typical timeline (16 currently pending)
Total Applications: 984 career history, across all art units

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 35.1% (-4.9% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 968 resolved cases

Office Action

§DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 12,198,725. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the patent anticipate the claims of the present application.

Patent No. 12,198,725 vs. Application No. 18/965,797

Claim 1 recites a computer system comprising: at least one processor; and computer memory having computer-readable instructions embodied thereon, that, when executed by the at least one processor, cause the computer system to perform operations comprising: receiving user-meeting data associated with a meeting recording playable at a default playback speed; programmatically determining, for a first segment of the meeting recording, at least one playback data feature based on the user-meeting data; automatically classifying the first segment of the meeting recording into a category based at least on the at least one playback data feature; determining that a word count, per period of time, of a second segment of the meeting recording differs from a word count, per period of time, of the first segment; based at least in part on the category and the difference in the word count between the first segment and the second segment, automatically determining, for the first segment of the meeting recording, an adaptive playback speed that is faster or slower than the default playback speed; time-stretching the first segment of the meeting recording into a time-stretched segment based on the adaptive playback speed; causing at least a portion of the meeting recording to be provided with the time-stretched segment; and generating an updated meeting recording comprising the time-stretched segment.
Claim 1 recites a computerized method, comprising: accessing user-meeting data associated with a meeting recording playable at a default playback speed; programmatically determining, for a respective segment of the meeting recording, at least one playback data feature based on the user-meeting data; automatically classifying the respective segment of the meeting recording into a category based at least on the at least one playback data feature; determining a ratio between a first word count of a prior segment and a second word count of the respective segment; based at least in part on the category and the ratio, automatically determining, for the respective segment of the meeting recording, an adaptive playback speed that is faster or slower than the default playback speed; time-stretching the respective segment of the meeting recording into a time-stretched segment based on the adaptive playback speed; causing at least a portion of the meeting recording to be provided with the time-stretched segment; and generating a data file comprising an updated meeting recording comprising the time-stretched segment.

Claim 2 recites the system of claim 1, wherein the first segment occurs prior to or after the second segment in the meeting recording, wherein determining the adaptive playback speed comprises: determining a ratio between the word count of the first segment and the word count of the second segment, wherein the ratio is applied to the first segment so that the word count, per period of time, of the first segment and the second segment are substantially equal.
Claim 2 recites the computerized method of claim 1, wherein time-stretching the respective segment further comprises applying the ratio to the respective segment so that the first word count and the second word count are substantially equal per the period of time.

Claim 1 recites…; determining that a word count, per period of time, of a second segment of the meeting recording differs from a word count, per period of time, of the first segment; based at least in part on the category and the difference in the word count between the first segment and the second segment, automatically determining, for the first segment of the meeting recording, an adaptive playback speed that is faster or slower than the default playback speed.

Claim 3 recites the computerized method of claim 1, further comprising determining that the first word count, per period of time, differs from the second word count, per the period of time, wherein the adaptive playback speed is further determined based on the difference in the first word count and the second word count, per the period of time.

Claim 3 recites the system of claim 1, wherein each weight defines an adaptive playback speed to which a corresponding segment of a plurality of segments is time-stretched, wherein the updated meeting recording is playable at a plurality of playback speeds corresponding to the plurality of segments, at least two of the plurality of playback speeds being different from each other.

Claim 4 recites the computerized method of claim 1, wherein each weight defines a corresponding adaptive playback speed to which a corresponding segment of a plurality of segments is time-stretched, wherein the updated meeting recording is playable at a plurality of playback speeds corresponding to the plurality of segments, at least two of the plurality of playback speeds being different from each other.
Claim 4 recites the system of claim 1, wherein the adaptive playback speed is determined based on visual content associated with the meeting recording, wherein the first segment of the meeting recording is time-stretched to coordinate audio of the meeting recording with the visual content.

Claim 5 recites the computerized method of claim 1, wherein the adaptive playback speed is further determined based on visual content associated with the meeting recording, wherein the respective segment of the meeting recording is time-stretched to coordinate audio of the meeting recording with the visual content.

Claim 5 recites the system of claim 1, wherein the operations comprise determining at least one of the first segment or the second segment of the meeting recording by: determining a contiguous portion of the meeting recording having a common playback data feature, the common data playback feature comprising an indication of: a speaker, a topic, an audio content, a visual content, an application that is presented, or a meeting attendees screen that is presented; determining a start time of the contiguous portion of the meeting recording that corresponds to a first change of the common data playback feature; determining an end time of the contiguous portion of the meeting recording that corresponds to a second change of the common data playback feature; and determining at least one of the first segment or the second segment of the meeting recording as the contiguous portion of the meeting recording from the start time to the end time.
Claim 6 recites the computerized method of claim 1, further comprising determining at least one of the respective segment or the prior segment of the meeting recording by: determining a contiguous portion of the meeting recording having a common playback data feature, the common data playback feature comprising an indication of: a speaker, a topic, an audio content, a visual content, an application that is presented, or a meeting attendees screen that is presented; determining a start time of the contiguous portion of the meeting recording that corresponds to a first change of the common data playback feature; determining an end time of the contiguous portion of the meeting recording that corresponds to a second change of the common data playback feature; and determining at least one of the respective segment or the prior segment of the meeting recording as the contiguous portion of the meeting recording from the start time to the end time.

Claim 6 recites the system of claim 1, wherein time-stretching the first segment comprises adjusting a playback speed of the first segment of the meeting recording from the default playback speed to the adaptive playback speed while maintaining a pitch of the first segment of the meeting recording.

Claim 7 recites the computerized method of claim 1, wherein time-stretching the respective segment comprises adjusting a playback speed of the respective segment of the meeting recording from the default playback speed to the adaptive playback speed while maintaining a pitch of the respective segment of the meeting recording.
Claim 7 recites the system of claim 1, wherein determining the at least one playback data feature comprises: detecting a user input indicative of accessing a previous portion of the meeting recording; determining that recency of the meeting recording being played or the meeting recording being accessed is within a recency threshold of time; and based on the recency being within the recency threshold of time, determining that a topic of the previous portion of the meeting corresponds to the at least one playback data feature, wherein the first segment is time-stretched based on the topic corresponding to the at least one playback data feature.

Claim 8 recites the computerized method of claim 1, wherein determining the at least one playback data feature comprises: detecting a user input indicative of accessing a previous portion of the meeting recording; determining that recency of the meeting recording being played or the meeting recording being accessed is within a recency threshold of time; and based on the recency being within the recency threshold of time, determining that a topic of the previous portion of the meeting corresponds to the at least one playback data feature, wherein the respective segment is time-stretched based on the topic corresponding to the at least one playback data feature.
Claim 8 recites the system of claim 1, wherein the operations comprise determining at least one of the first segment or the second segment of the meeting recording by: determining a change in sound parameters, of audio of the meeting recording, corresponding to a start time; determining whether the change in the sound parameters corresponds to an utterance or a gap; determining another change in the sound parameters, of the meeting recording, corresponding to an end time, wherein the utterance or the gap has a duration defined between the start time and end time; determining that at least one of the first segment or the second segment corresponds to the utterance or the gap; and classifying at least one of the first segment or the second segment based on whether at least one of the first segment or the second segment corresponds to the utterance or the gap, wherein at least one of the first segment or the second segment is time-stretched based on the classification.

Claim 9 recites the computerized method of claim 1, further comprising determining at least one of the respective segment or the prior segment of the meeting recording by: determining a change in sound parameters, of audio of the meeting recording, corresponding to a start time; determining whether the change in the sound parameters corresponds to an utterance or a gap; determining another change in the sound parameters, of the meeting recording, corresponding to an end time, wherein the utterance or the gap has a duration defined between the start time and the end time; determining that at least one of the respective segment or the prior segment corresponds to the utterance or the gap; and classifying at least one of the respective segment or the prior segment based on whether at least one of the respective segment or the prior segment corresponds to the utterance or the gap, wherein at least one of the respective segment or the prior segment is time-stretched based on the classification.
Claim 9 recites the system of claim 1, wherein the at least one playback data feature comprises: a user feature specific to a particular user; and a content feature specific to content of the meeting recording.

Claim 10 recites the computerized method of claim 1, wherein the at least one playback data feature comprises: a user feature specific to a particular user; and a content feature specific to content of the meeting recording.

Claim 10 recites the system of claim 1, wherein the at least one playback data feature comprises at least one of a topic of the meeting recording, a type of meeting recording, an identity of a speaker in the meeting recording, a relationship of the speaker to a viewer, a duration of the meeting recording, a duration of pauses in the meeting recording, a transition from a first speaker to a second speaker different from the first speaker, a timing constraint associated with a calendar application, a rate of words per period of time, visual feedback indicative of a level of user engagement with the meeting recording from a wearable device, or contextual metadata expressed as data features indicative of meeting invitees, meeting attendees, or a type of meeting.

Claim 11 recites the computerized method of claim 1, wherein the at least one playback data feature comprises at least one of a topic of the meeting recording, a type of meeting recording, an identity of a speaker in the meeting recording, a relationship of the speaker to a viewer, a duration of the meeting recording, a duration of pauses in the meeting recording, a transition from a first speaker to a second speaker different from the first speaker, a timing constraint associated with a calendar application, a rate of words per period of time, visual feedback indicative of a level of user engagement with the meeting recording from a wearable device, or contextual metadata expressed as data features indicative of meeting invitees, meeting attendees, or a type of meeting.
Claim 11 recites a computerized method, comprising: receiving, from a computing device, a request to access a meeting recording; accessing the meeting recording comprising a plurality of time-stretched segments presentable at a corresponding adaptive playback speed and that have been generated based on at least one playback data feature from user-meeting data associated with the meeting recording, wherein the corresponding adaptive playback speed is determined automatically based on visual content associated with the meeting recording and based on a determination that a word count, per period of time, of a first segment of the meeting recording differs from a word count, per period of time, of a second segment, wherein the first segment of the meeting recording is time-stretched to coordinate audio of the meeting recording with the visual content; subsequent to receiving the request, presenting a graphical user interface (GUI) comprising a stream region and a playback timeline region separate from the stream region, the playback timeline region comprising an indication corresponding to each time-stretched segment of the plurality of time-stretched segments of the meeting recording; and presenting, on the computing device, the meeting recording in the stream region based on the plurality of time-stretched segments and the corresponding adaptive playback speed.
Claim 12 recites a computer storage media having computer-executable instructions embodied thereon, that, when executed by at least one computer processor, cause computing operations to be performed, the operations comprising: receiving, from a computing device, a request to access a meeting recording; accessing the meeting recording comprising a plurality of time-stretched segments presentable at a corresponding adaptive playback speed and that have been generated based on at least one playback data feature from user-meeting data associated with the meeting recording, wherein the corresponding adaptive playback speed is determined based on visual content associated with the meeting recording and based on a ratio between a first word count of a prior segment and a second word count of a respective segment, wherein the respective segment is time-stretched to the corresponding adaptive playback speed; subsequent to receiving the request, presenting a graphical user interface (GUI) comprising a stream region and a playback timeline region separate from the stream region, the playback timeline region comprising an indication corresponding to each time-stretched segment of the plurality of time-stretched segments of the meeting recording; and presenting, on the computing device, the meeting recording in the stream region based on the plurality of time-stretched segments and the corresponding adaptive playback speed.

Claim 11 recites…; wherein the corresponding adaptive playback speed is determined automatically based on visual content associated with the meeting recording and based on a determination that a word count, per period of time, of a first segment of the meeting recording differs from a word count, per period of time, of a second segment.
Claim 13 recites the computer storage media of claim 12, wherein the corresponding adaptive playback speed is further determined based on a determination that the first word count differs from the second word count per period of time.

Claim 11 recites…; wherein the corresponding adaptive playback speed is determined automatically based on visual content associated with the meeting recording and based on a determination that a word count, per period of time, of a first segment of the meeting recording differs from a word count, per period of time, of a second segment, wherein the first segment of the meeting recording is time-stretched to coordinate audio of the meeting recording with the visual content.

Claim 14 recites the computer storage media of claim 13, wherein the corresponding adaptive playback speed is applied to the respective segment so that the word count, per the period of time, of the time-stretched respective segment and the prior segment are substantially equal, and wherein the respective segment of the meeting recording is time-stretched to the adaptive playback speed to coordinate audio of the meeting recording with the visual content.

Claim 12 recites the computerized method of claim 11, comprising: receiving a first input indicative of hovering over a segment of the meeting recording; in response to receiving the first input, causing presentation of a window comprising a plurality of playback speed options; and receiving a second input indicative of selecting a playback speed option, wherein the corresponding adaptive playback speed for the first segment changes to correspond to the playback speed of the selected playback speed option.
Claim 15 recites the computer storage media of claim 12, wherein the operations further comprise: receiving a first input indicative of hovering over a segment of the meeting recording; in response to receiving the first input, causing presentation of a window comprising a plurality of playback speed options; and receiving a second input indicative of selecting a playback speed option, wherein the corresponding adaptive playback speed for the respective segment changes to correspond to the playback speed of the selected playback speed option.

Claim 13 recites the computerized method of claim 11, wherein the plurality of time-stretched segments are visually distinct, on the playback timeline region, from a plurality of default segments that are not time-stretched.

Claim 16 recites the computer storage media of claim 12, wherein the plurality of time-stretched segments are visually distinct, on the playback timeline region, from a plurality of default segments that are not time-stretched.

Claim 14 recites the computerized method of claim 11, comprising determining a weight for each time-stretched segment of a plurality of time-stretched segments of the meeting recording, wherein each weight defines a corresponding time-stretching for a corresponding time-stretched segment of the plurality of time-stretched segments, wherein the updated meeting recording is playable based on the weight for each time-stretched segment.

Claim 17 recites the computer storage media of claim 12, wherein the operations further comprise determining a weight for each time-stretched segment of a plurality of time-stretched segments of the meeting recording, wherein each weight defines a corresponding time-stretching for a corresponding time-stretched segment of the plurality of time-stretched segments, wherein the updated meeting recording is playable based on the weight for each time-stretched segment.
Claim 15 recites the computerized method of claim 11, further comprising traversing a progress indication along the playback timeline region based on the meeting recording being played.

Claim 18 recites the computer storage media of claim 12, wherein the operations further comprise traversing a progress indication along the playback timeline region based on the meeting recording being played.

Claim 1 recites a computer system comprising: at least one processor; and computer memory having computer-readable instructions embodied thereon, that, when executed by the at least one processor, cause the computer system to perform operations comprising: receiving user-meeting data associated with a meeting recording playable at a default playback speed; programmatically determining, for a first segment of the meeting recording, at least one playback data feature based on the user-meeting data; automatically classifying the first segment of the meeting recording into a category based at least on the at least one playback data feature; determining that a word count, per period of time, of a second segment of the meeting recording differs from a word count, per period of time, of the first segment; based at least in part on the category and the difference in the word count between the first segment and the second segment, automatically determining, for the first segment of the meeting recording, an adaptive playback speed that is faster or slower than the default playback speed; time-stretching the first segment of the meeting recording into a time-stretched segment based on the adaptive playback speed; causing at least a portion of the meeting recording to be provided with the time-stretched segment; and generating an updated meeting recording comprising the time-stretched segment.
Claim 7 recites the system of claim 1, wherein determining the at least one playback data feature comprises: detecting a user input indicative of accessing a previous portion of the meeting recording; determining that recency of the meeting recording being played or the meeting recording being accessed is within a recency threshold of time; and based on the recency being within the recency threshold of time, determining that a topic of the previous portion of the meeting corresponds to the at least one playback data feature, wherein the first segment is time-stretched based on the topic corresponding to the at least one playback data feature.

Claim 19 recites a computer system comprising: at least one processor; and computer memory having computer-readable instructions embodied thereon, that, when executed by the at least one processor, cause the computer system to perform operations comprising: receiving user-meeting data associated with a meeting recording playable at a default playback speed; dividing the meeting recording into a plurality of segments; determining, based on the user-meeting data and for at least one segment of the plurality of segments, at least one playback data feature by: detecting a user input indicative of accessing a previous portion of the meeting recording; determining that a recency of the meeting recording being played or the meeting recording being accessed is within a recency threshold of time; and based on the recency being within the recency threshold of time, determining that a topic of the previous portion of the meeting corresponds to the at least one playback data feature; based at least in part on the at least one playback data feature, determining, for the at least one segment of the plurality of segments, a corresponding adaptive playback speed that is faster or slower than the default playback speed; time-stretching the at least one segment of the plurality of segments into a time-stretched segment based on the corresponding adaptive playback speed; and causing the meeting recording to be provided with the time-stretched segment playable at the corresponding adaptive playback speed.

Claim 2 recites the system of claim 1, wherein the first segment occurs prior to or after the second segment in the meeting recording, wherein determining the adaptive playback speed comprises: determining a ratio between the word count of the first segment and the word count of the second segment, wherein the ratio is applied to the first segment so that the word count, per period of time, of the first segment and the second segment are substantially equal.

Claim 20 recites the computer system of claim 19, wherein the operations further comprise determining a ratio between a first word count of a prior segment and a second word count of the at least one segment, wherein the ratio is applied to the at least one segment so that the word count, per period of time, of the time-stretched segment and the prior segment is substantially equal, wherein the corresponding adaptive playback speed is further determined based on the ratio.

Allowable Subject Matter

Claims 1-20 would be allowable if applicant overcomes the applied non-statutory double patenting rejection.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GIRUMSEW WENDMAGEGN whose telephone number is (571) 270-1118. The examiner can normally be reached 9:00-7:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Tran, can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

GIRUMSEW WENDMAGEGN
Primary Examiner, Art Unit 2484

/GIRUMSEW WENDMAGEGN/
Primary Examiner, Art Unit 2484
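The claim chart above turns on one concrete mechanism: pick a playback speed for a segment so that its words-per-minute matches a neighboring segment's, then time-stretch the audio by that factor while preserving pitch. A minimal sketch of that arithmetic (hypothetical names; real audio processing omitted):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    word_count: int
    duration_s: float  # duration at the default 1.0x playback speed

    @property
    def wpm(self) -> float:
        """Words per minute at default speed."""
        return self.word_count / (self.duration_s / 60.0)

def adaptive_speed(prior: Segment, current: Segment) -> float:
    """Speed factor that makes `current`'s effective words-per-minute
    match `prior`'s (the claimed word-count ratio between segments)."""
    return prior.wpm / current.wpm

def time_stretch(seg: Segment, speed: float) -> Segment:
    """Pitch-preserving time-stretch: same words, duration scaled by 1/speed."""
    return Segment(seg.word_count, seg.duration_s / speed)

prior = Segment(word_count=300, duration_s=120.0)    # 150 wpm
current = Segment(word_count=150, duration_s=120.0)  # 75 wpm, so speed it up
speed = adaptive_speed(prior, current)               # 2.0x
stretched = time_stretch(current, speed)             # 60 s, effective 150 wpm
```

A slow segment (75 wpm) played at 2.0x lands at the prior segment's 150 wpm, which is exactly the "substantially equal word count per period of time" condition the claims recite.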
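Several of the compared claims (e.g., system claim 5 and method claim 6) define a segment as a contiguous portion sharing a common playback data feature (speaker, topic, presented application), with its start and end at changes in that feature. A rough sketch of that boundary rule, assuming pre-extracted `(time, feature)` samples sorted by time:

```python
def segments_by_feature(samples, recording_end_s):
    """Group sorted (time_s, feature_value) samples into contiguous segments;
    a segment starts and ends where the shared feature value changes."""
    segments = []
    start, current = samples[0]
    for t, value in samples[1:]:
        if value != current:                  # feature changed: close the segment
            segments.append((start, t, current))
            start, current = t, value
    segments.append((start, recording_end_s, current))  # final segment
    return segments

# E.g., speaker identity as the common feature (times in seconds):
speaker_samples = [(0, "A"), (30, "A"), (45, "B"), (80, "A")]
segs = segments_by_feature(speaker_samples, recording_end_s=100)
# segs: [(0, 45, "A"), (45, 80, "B"), (80, 100, "A")]
```

The same grouping works unchanged for topic or presented-application features; only the sample extraction differs.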

Prosecution Timeline

Dec 02, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604086
TELESCOPE WITH AT LEAST ONE VIEWING CHANNEL
2y 5m to grant • Granted Apr 14, 2026
Patent 12602939
INSPECTION SYSTEM AND INSPECTION METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12604068
SELECTIVE PLAYBACK OF AUDIO AT NORMAL SPEED DURING TRICK PLAY OPERATIONS
2y 5m to grant • Granted Apr 14, 2026
Patent 12597445
COLLABORATIVE ENHANCEMENT OF VOLUMETRIC VIDEO WITH A DEVICE HAVING MULTIPLE CAMERAS
2y 5m to grant • Granted Apr 07, 2026
Patent 12598319
METHODS AND SYSTEMS FOR STORING AERIAL IMAGES ON A DATA STORAGE DEVICE
2y 5m to grant • Granted Apr 07, 2026
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
98%
With Interview (+21.4%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 968 resolved cases by this examiner. Grant probability derived from career allow rate.
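The headline projections reconcile by simple arithmetic on the career counts. The following is a sanity check, assuming (since the dashboard does not document its formula) that the grant probability is the rounded career allow rate and that the interview lift is added to it:

```python
granted, resolved = 742, 968            # career counts reported above
allow_rate_pct = 100 * granted / resolved   # ~76.65, shown as 77%
interview_lift_pct = 21.4                   # reported lift for interviewed cases
with_interview_pct = allow_rate_pct + interview_lift_pct  # ~98.05, shown as 98%
```

Both rounded figures match the dashboard's 77% and 98%, so the projections appear to be direct restatements of the career statistics rather than case-specific modeling.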
