Prosecution Insights
Last updated: April 19, 2026
Application No. 18/545,449

SYSTEMS AND METHODS FOR PROVIDING FOCUSED CONTENT

Final Rejection §103
Filed: Dec 19, 2023
Examiner: JOHNSON-CALDERON, FRANK J
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Adeia Guides Inc.
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 57% (127 granted / 222 resolved; -0.8% vs TC avg)
Interview Lift: +20.0% (strong lift; allow rate for resolved cases with vs. without an interview)
Avg Prosecution: 2y 11m typical timeline (21 currently pending)
Total Applications: 243 across all art units (career history)
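For orientation, the headline figures above are simple arithmetic on the career counts. A minimal sketch, assuming (as the card's wording implies but does not state) that the interview lift is additive in percentage points:

```python
# Career counts shown on this card.
granted, resolved = 127, 222

career_allow_rate = granted / resolved                           # ~0.572 -> shown as 57%
interview_lift = 0.200                                           # +20.0 percentage points
allow_rate_with_interview = career_allow_rate + interview_lift   # ~0.772 -> shown as 77%

print(f"Career allow rate:         {career_allow_rate:.1%}")
print(f"Allow rate with interview: {allow_rate_with_interview:.1%}")
```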

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 67.1% (+27.1% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Comparisons are against the Tech Center average estimate. Based on career data from 222 resolved cases.
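The "vs TC avg" figures read as percentage-point differences between this examiner and the Tech Center average estimate, so the baselines can be recovered by subtraction. A minimal sketch (interpretation of the metric is assumed, not stated by the dashboard); note that all four statutes recover roughly the same ~40% baseline:

```python
# Examiner statute-specific rates and their stated deltas vs the TC average, in percent.
rates  = {"§101": 4.3,   "§103": 67.1, "§102": 17.0,  "§112": 7.2}
deltas = {"§101": -35.7, "§103": 27.1, "§102": -23.0, "§112": -32.8}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]   # delta = examiner rate minus TC average
    print(f"{statute}: examiner {rate:4.1f}%, TC average estimate {tc_avg:.1f}%")
```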

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claims 31-50 have been considered but are moot because the arguments do not apply to the new rejection made below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 31-35, 37-38, 40-45, 47-48, and 50 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen-Tidhar et al. (US 20220353435, hereinafter Cohen-Tidhar) in view of Ramaswamy et al. (US 20200014961, hereinafter Ramaswamy) and Hughes et al. (US 20100179867, hereinafter Hughes).

Regarding claim 31, “A method comprising: generating for display a content item”: Cohen-Tidhar teaches (¶0010 and ¶0014) systems, devices, and methods (as well as a particular format or protocol, and a suitable algorithm) to enable a viewer of a video, who utilizes an electronic device for video playback or video consumption or video engagement, to perform a high-quality zoom-in function (and high-quality zoom-out function) on a specific object that is shown in the video content, and/or on a specific area-of-interest or object-of-interest that is depicted in the video content; (¶0046-¶0047) displaying a frame of the original video; (¶0090, ¶0101, ¶0106) computer implemented.

As to “determining… that a frame of a plurality of frames of the content item includes a focus object”, Cohen-Tidhar teaches (¶0029) a Metadata List Generator 114 generates a Metadata List 115 (or file, or data-item), which is associated with the video, and which indicates the available cropped version(s) that are available for each frame (or time-slot, or time-segment) of the video, and optionally indicating also the in-frame coordinates or location or offset of such cropped versions (e.g., relative to a first horizontal edge and to a first vertical edge of the original video; or otherwise relative to a fixed point or a particular corner of the full uncropped video); (¶0048) objects-of-interest or areas-of-interest that were determined for the original frame, each such area-of-interest being a rectangle having dimensions of 480p (e.g., each sub-frame is 854 by 480 pixels); in some embodiments, a human user may manually create these particular sub-frames, focusing on particular objects or areas.

As to “receiving a user interface selection to present focus content corresponding to the focus object”, Cohen-Tidhar teaches (¶0032) the end-user requests zoom-in.
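The “index map” limitation addressed in the next passage is mapped onto Cohen-Tidhar's per-frame Metadata List (¶0029) and Manifest File (¶0030). As an illustration only, here is a minimal sketch of that kind of per-segment index of zoomable regions; the field names, URIs, and coordinates are hypothetical and are neither the application's nor the reference's actual schema:

```python
from dataclasses import dataclass

@dataclass
class CroppedVersion:
    """One available cropped (zoomed-in) rendition of a time segment."""
    object_id: str       # e.g. "object-of-interest-1"
    uri: str             # where the cropped segment can be fetched
    x: int               # in-frame offset of the cropped region, in pixels
    y: int
    width: int = 854     # 480p crop (854 x 480), per Cohen-Tidhar ¶0048
    height: int = 480

# Index map keyed by segment number: which cropped versions exist for that
# segment, and where each region sits inside the full-FOV frame.
index_map: dict[int, list[CroppedVersion]] = {
    0: [CroppedVersion("object-1", "seg0_obj1.ts", x=120, y=300)],
    1: [CroppedVersion("object-1", "seg1_obj1.ts", x=132, y=296),
        CroppedVersion("object-2", "seg1_obj2.ts", x=2400, y=900)],
}

def regions_for(segment: int) -> list[CroppedVersion]:
    """Look up the zoomable regions available for a given segment."""
    return index_map.get(segment, [])
```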
As to “based at least in part on receiving the user interface selection, accessing an index map for the content item, wherein the index map identifies a region in the frame of the plurality of frames of the content item that includes focused content”, Cohen-Tidhar teaches (¶0030) Manifest File 117 represents or indicates the addresses and/or metadata of the available video segments; for example, indicating that a full FOV version of 480p is available for the entire 60 one-second segments, and indicating that a first cropped 480p version is available for the entire 60 one-second segments that depict Object-of-Interest 1, and indicating that a second cropped 480p version is available for the entire 60 one-second segments that depict Object-of-Interest 2, and indicating that a third cropped 480p version is available for 14 one-second segments that depict Object-of-Interest 3 (spanning the 25th to the 38th one-second video segments). The Manifest File 117 may indicate the URL or URI for each such video segment, and its relevant metadata. The Manifest File 117 may be stored in the Video Repository 103, as a separate file that is associated with (or linked to) the original video; or as a “sidecar” file that accompanies the original video; or, in some implementations, as metadata within a header of the video file itself; or using a database or a lookup table or pointer(s); or using other suitable methods that associate between a particular video and a particular manifest file.

As to “and generating for display, based at least in part on receiving the user interface selection, the focused content by modifying the identified region of the frame”, Cohen-Tidhar teaches (¶0032) the end-user device is playing a video of 60 seconds; at time-point 00:13 (mm:ss), the end-user requests zoom-in; the server immediately starts generating cropped video versions, for one-second time-segments from that time-point and onward; the first cropped video version is available after two seconds of processing, and starts displaying at time-point 00:15; during its playback (from time-point 00:15 to time-point 00:16), the next five video-segments are generated as cropped versions, and become available for zoomed-in playback; during the playback of those five video-segments, the next 12 video-segments are generated as cropped versions; and so forth, thereby producing on-the-fly the cropped video-segments in response to a triggering command (a zoom-in command or request) from the end-user device. Similarly, the Metadata List 115 and/or the Manifest File 117 may be constructed and/or updated and/or augmented on-the-fly, in the background while the end-user device is playing the already-generated cropped video segments; thereby providing a zoom-in functionality that is triggered by an initial zoom-in command, and is constructed gradually via background server-side processing while the end-user device displays already-generated cropped (and thus zoomed-in) video segments.

Cohen-Tidhar does not teach “wherein each user profile of the plurality of user profiles comprises respective indications related to previous selections of displaying focused content related to the content item” and determining, “based at least in part on the respective indications related to previous selections of displaying focused content”, that a frame of a plurality of frames of the content item includes a focus object.
However, Ramaswamy teaches (¶0034) the user interface is operative to enable a user to map zoom buttons to selected objects of interest (e.g., the user's favorite athletes); (¶0035) there may be many objects or athletes with available zoomed content streams. In some such embodiments, user preferences or past user viewing history may be used to limit the zoom controls to the athletes or objects that are favorites of the user or are otherwise selected explicitly or implicitly by the user. Objects of interest may be selected, for example, in response to a determination that the user has explicitly indicated a preference to follow those objects, or in response to a determination that the user has followed or zoomed into those objects of interest in past sessions. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Cohen-Tidhar to use the user preferences/past user viewing history for object determination as taught by Ramaswamy, for the benefit of personalizing the end-user experience, learning the user preferences (which allows for content targeting), and making the user interface simpler to navigate.

Cohen-Tidhar and Ramaswamy do not teach “accessing a user profile database comprising a plurality of user profiles.” However, Hughes teaches (¶0052) the database may store user profiles on the users, which may include information for determining the users' preferences; (¶0125) the user profile stored in the database for information collected from monitoring the user's interaction with the application. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Cohen-Tidhar and Ramaswamy with the database for storing user preferences/profiles as taught by Hughes, for the benefit of having preferences stored in an organized data structure that allows for fast searching/retrieval and scalability.

Regarding claim 32, “The method of claim 31, wherein the index map is accessed after the content item is generated for display”: Cohen-Tidhar teaches (¶0038) in response to such user command, the Video Playback Unit 151 determines which particular object-of-interest (or area-of-interest) is the subject of the zoom-in request, and obtains from the Manifest File 117 and/or from the Metadata List 115 the pointer(s) to the subsequent video segments that depict the cropped version of that particular object-of-interest.

Regarding claim 33, “The method of claim 31 further comprising generating for display, simultaneously with the content item, an indicator of the identified region of the content item for which the focused content is available”: Cohen-Tidhar teaches (¶0035) optionally, the Video Playback Unit 151 may visually indicate to the end-user, via a Zoomable Objects Marking/Highlighting Unit 153, the existence and/or the location of particular objects-of-interest that are zoomable (e.g., objects for which a high-quality fine-details zoom-in function is available, throughout the video or throughout particular portions of the video).

Regarding claim 34, “The method of claim 31, wherein the index map comprises coordinates of the identified region”: Cohen-Tidhar teaches (¶0076) the metadata file of Code 3 describes a video with two tracked objects: a face, and a handbag.
The metadata file informs the video player (on the end-user device) which objects are available for zooming-in at any specific time-point, and further informs about the spatial position (e.g., coordinates) of each such zoomable object within the frame.

Regarding claim 35, “The method of claim 31, wherein generating for display the focused content comprises generating for display an enlarged view of the identified region by cropping the content item based on the index map”: Cohen-Tidhar teaches (¶0073) FIG. 3E shows a frame 312 from another video version, prepared by cropping a 480p rectangle from the original 4K video, surrounding the handbag held by the fashion model; this video version tracking the handbag as an object-of-interest; thus enabling the end-user to request a zoom-in, relative to the original video, and such zoom-in causes a switch to playback of this cropped video version.

Regarding claim 37, “The method of claim 31, wherein the content item comprises the index map”: Cohen-Tidhar teaches (¶0030) the Manifest File 117 may be stored in the Video Repository 103, as a separate file that is associated with (or linked to) the original video; or as a “sidecar” file that accompanies the original video; or, in some implementations, as metadata within a header of the video file itself; or using a database or a lookup table or pointer(s); or using other suitable methods that associate between a particular video and a particular manifest file.

Regarding claim 38, “The method of claim 31, wherein generating for display the focus content by modifying the identified region of the frame comprises overlaying the focus content over the content item”: Cohen-Tidhar teaches (¶0093) wherein the visual marking is generated and is displayed as an overlay element on top of the first video stream (V1) during playback of the first video stream.

Regarding claim 40, “The method of claim 31 further comprising: determining user behavior based on selections to present focused content; … and generating for display focused content based on the user behavior”: Ramaswamy teaches (¶0034) the user interface is operative to enable a user to map zoom buttons to selected objects of interest (e.g., the user's favorite athletes); (¶0035) there may be many objects or athletes with available zoomed content streams. In some such embodiments, user preferences or past user viewing history may be used to limit the zoom controls to the athletes or objects that are favorites of the user or are otherwise selected explicitly or implicitly by the user. Objects of interest may be selected, for example, in response to a determination that the user has explicitly indicated a preference to follow those objects, or in response to a determination that the user has followed or zoomed into those objects of interest in past sessions. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Cohen-Tidhar to use the user preferences/past user viewing history for object determination as taught by Ramaswamy, for the benefit of personalizing the end-user experience, learning the user preferences (which allows for content targeting), and making the user interface simpler to navigate.
As to “based on the determined user behavior, retrieving a second index map comprising additional information related to the focused content, wherein the second index map is retrieved after a portion of the content item was generated for display” and generating for display focused content based on “the second index map”, Cohen-Tidhar teaches (¶0058-¶0065 and Code 1) the following code portion, denoted Code 1, is a demonstrative example of an HLS manifest, which may be generated and utilized in accordance with some embodiments. For example, an original video is a high-resolution video of a soccer game, accompanied by an audio stream. The manifest includes data representing the single audio stream, and six different versions of video streams from which the end-user device may select one for obtaining and playing; (¶0066-¶0067 and Code 2) each one of the streams points to a detailed segment manifest.

Regarding claim 41, its rejection is similar to claim 31. Regarding claim 42, its rejection is similar to claim 32. Regarding claim 43, its rejection is similar to claim 33. Regarding claim 44, its rejection is similar to claim 34. Regarding claim 45, its rejection is similar to claim 35. Regarding claim 47, its rejection is similar to claim 37. Regarding claim 48, its rejection is similar to claim 38. Regarding claim 50, its rejection is similar to claim 40.

Claims 36 and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen-Tidhar, Ramaswamy, and Hughes in view of Lin (US 20070109324).

Regarding claim 36, Cohen-Tidhar, Ramaswamy, and Hughes do not teach “The method of claim 35, wherein generating for display the enlarged view of the identified region comprises upsampling the identified region of the content item to generate the enlarged view.” However, Lin teaches (Fig. 2) receiving commands to upscale an area of interest and sending a video frame containing the upscaled area to the video display for playback. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Cohen-Tidhar, Ramaswamy, and Hughes with the upscaling of the video as taught by Lin for the benefit of allowing users to enjoy a higher quality video.

Regarding claim 46, its rejection is similar to claim 36.

Claims 39 and 49 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen-Tidhar in view of Turgut et al. (US 20170332114, hereinafter Turgut).

Regarding claim 39, Cohen-Tidhar, Ramaswamy, and Hughes do not teach “The method of claim 31 further comprising: receiving the content item from a media content source; and receiving the index map from an index map source, wherein the index map source is different than the media content source.” However, Turgut teaches (Fig. 1) a manifest server and a CDN with edge servers 120 for distributing content. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the system as taught by Cohen-Tidhar, Ramaswamy, and Hughes to utilize different sources/servers as taught by Turgut for the benefit of optimizing each server for its own purpose, for caching efficiency (long-term storage of content segments in edge CDN locations, whilst allowing for manifest modification), and for security and access control.

Regarding claim 49, its rejection is similar to claim 39.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ramaswamy et al. (US 20180270515) – (¶0111) users are able to track an object of interest without going through the highlighting mechanism. As an example, a user could set a preference in their client device that they would like to see a zoom coded view of their favorite player whenever the player is in the camera's field of view.

Butcher (US 20110299832) – (¶0043) In order to determine which object within the video content item may be of interest to the viewer, at 70, the client may access a zoom preference of a viewer which indicates object(s) to zoom during video playback. The zoom preference may include zoom preferences determined in any suitable manner. For example, in some embodiments, the zoom preference may include zoom preferences identified by the viewer. For example, a viewer interested in guitar solos may indicate within her zoom preferences to provide a zoomed view of a guitar when it appears within a video content item (e.g., music videos, concerts, etc.). As another example, a viewer having a favorite athlete may indicate within his zoom preferences to provide a zoomed view of that athlete when that athlete appears within a video content item (e.g., a sports game, an interview, a commercial, a cameo appearance within a movie, etc.).

Lavie (US 20200077142) – (¶0192) When a specific user initiates playing of the movie, his preferences are obtained. For each scene having multiple versions, the version of the scene that best matches the specific user's preferences is selected. A version of the complete movie, customized to the specific user's preferences, is then assembled. The assembled version includes the selected version of each scene having multiple versions, and the single version of each scene not having multiple versions. The constructed version of the movie is then played to the specific user. This method is illustrated in FIG. 4B by a selection signal 45 including a sequence of selected versions of scenes. The selection signal 45 is sent from system 42, which has obtained the user's preferences, to database 40. Subsequently database 40 transmits a video content item 46, including the sequence of selected versions of scenes, to system 42, and from there to a smart television 47. The preferences of a specific user are obtained by the video distribution system 54 as in the previous methods, and they are used in real-time for automatically selecting the camera whose view best matches the user preferences. For example, if the user preferences indicate that the user prefers zooming in on faces, then when detecting, in real-time, that multiple cameras are simultaneously directed at a person and a specific one of the cameras is showing a close-up of the person's face, the video distribution system 54 automatically provides to the specific user the video originating from that specific camera.

Jackson, JR et al. (US 20180199080) – (¶0069) the user generated data (for example related to their favorite player) may be stored at a database in a memory of the system which stores profile information for the particular user, identified by a unique identifier, associated with the user, or in some cases associated with the client terminal. The database may store and keep track of all input information from the particular user, such that a profile may be constructed which includes or facilitates learning of the user's preferences over time.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK J JOHNSON whose telephone number is (571) 272-9629. The examiner can normally be reached 9:00 AM-3:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian T. Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Frank Johnson/
Primary Examiner, Art Unit 2425
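Taken together, the combination assembled in this action amounts to: store zoom-selection preferences in a user-profile database (Hughes), use them to choose a focus object (Ramaswamy), then consult a manifest/index map and switch playback to a cropped rendition of that object's region, generating segments on the fly if needed (Cohen-Tidhar ¶0030-¶0032). The following is a minimal end-to-end sketch under those assumptions; the names, URIs, and the fallback generator are hypothetical, not the references' or the applicant's actual implementations:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """User profile holding past zoom selections (the Ramaswamy/Hughes reading)."""
    user_id: str
    preferred_objects: set[str] = field(default_factory=set)

# Manifest/index map: object-of-interest -> {segment number -> URI of cropped rendition}.
manifest: dict[str, dict[int, str]] = {
    "player-7": {0: "p7_seg0.ts", 1: "p7_seg1.ts"},
    "handbag":  {1: "bag_seg1.ts"},
}

def generate_cropped_segment(obj: str, segment: int) -> str:
    """Stand-in for server-side, on-the-fly cropping (Cohen-Tidhar ¶0032)."""
    uri = f"{obj}_seg{segment}_generated.ts"
    manifest.setdefault(obj, {})[segment] = uri   # augment the manifest as segments appear
    return uri

def handle_zoom_request(profile: UserProfile, visible: list[str], segment: int) -> str:
    """On a zoom-in selection, pick a focus object and return its cropped segment URI."""
    # Limit candidate zoom targets to the user's stored favorites when possible.
    candidates = [o for o in visible if o in profile.preferred_objects] or visible
    target = candidates[0]
    # Serve a pre-indexed cropped segment, or fall back to on-the-fly generation.
    return manifest.get(target, {}).get(segment) or generate_cropped_segment(target, segment)

profile = UserProfile("u1", preferred_objects={"player-7"})
print(handle_zoom_request(profile, visible=["handbag", "player-7"], segment=2))
# -> player-7_seg2_generated.ts
```

The fallback branch mirrors the gradual, background construction of cropped segments after an initial zoom command that the action cites at ¶0032.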

Prosecution Timeline

Dec 19, 2023: Application Filed
Dec 19, 2023: Response after Non-Final Action
Jun 07, 2024: Response after Non-Final Action
Jun 26, 2025: Non-Final Rejection — §103
Nov 26, 2025: Response Filed
Jan 30, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597262
DETECTING AND IDENTIFYING OBJECTS REPRESENTED IN SENSOR DATA GENERATED BY MULTIPLE SENSOR SYSTEMS
2y 5m to grant; granted Apr 07, 2026
Patent 12583386
METHOD FOR DETECTING TARGET PEDESTRIAN AROUND VEHICLE, METHOD FOR MOVING VEHICLE, AND DEVICE
2y 5m to grant; granted Mar 24, 2026
Patent 12575718
UNIVERSAL ENDOSCOPE ADAPTER
2y 5m to grant; granted Mar 17, 2026
Patent 12574588
Image Selection Using Motion Data
2y 5m to grant; granted Mar 10, 2026
Patent 12573219
DEVICE AND METHOD FOR COUNTING AND IDENTIFICATION OF BACTERIAL COLONIES USING HYPERSPECTRAL IMAGING
2y 5m to grant; granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 57%
With Interview: 77% (+20.0%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 222 resolved cases by this examiner. Grant probability derived from career allow rate.
