Prosecution Insights
Last updated: April 19, 2026
Application No. 18/506,881

IMAGE LEARNING MODEL
Non-Final OA: §102, §103
Filed: Nov 10, 2023
Examiner: ZUBERI, MOHAMMED H
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Netflix Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 70% (306 granted / 437 resolved; +15.0% vs TC avg, above average)
Interview Lift: +27.8% (strong; based on resolved cases with interview)
Typical Timeline: 3y 1m average prosecution; 23 applications currently pending
Career History: 460 total applications across all art units
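As a quick arithmetic check on the figures above (a sketch; reading the interview lift as a percentage-point difference is an assumption):

```python
# Sanity-check the examiner statistics reported above.
granted, resolved = 306, 437
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 70.0%, matching the 70% shown

# Assumption: the +27.8% interview lift is a percentage-point difference,
# so the implied allow rate without an interview is 98% minus the lift.
with_interview = 0.98
lift = 0.278
without_interview = with_interview - lift
print(f"Implied rate without interview: {without_interview:.1%}")  # 70.2%
```

The implied no-interview rate lands near the 70% career average, which is consistent with the chart's framing.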

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 437 resolved cases.

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the patent application as filed on 11/10/2023. This action is made Non-Final. Claims 1-20 are pending in the case. Claims 1, 13, and 20 are independent claims.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/23/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings filed on 11/10/2023 have been accepted by the Examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 2, 4, 11-13 and 20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Engin et al. ("Causal Machine Learning by Creative Insights," Netflix Technology Blog, January 11, 2023, 20 pages, from IDS filed 6/23/2025; hereinafter Engin). The applied reference has a common assignee with the instant application.
Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.

Claim 1: Engin discloses A computer-implemented method comprising: accessing at least one image associated with a media item (Pg 1: “at Netflix...promotional artwork...represents each title featured on our platform”); identifying an association between the accessed image and an image take fraction that indicates how well the accessed image correlates to views of the associated media item (Pg 3, 4 and 6: “we have rich dataset of promotional artwork components and user engagement data...we represent the success of an artwork with the take rate: the probability of an average user to watch the promoted title after seeing its promotional artwork, adjusted for the popularity of the title...here are two promotional artwork assets from Unbreakable Kimmy Schmidt.
We know that the image on the left performed better than the image on the right”); based at least on the identified association between the accessed media item image and the corresponding image take fraction, training a machine learning (ML) model to predict which images will optimally correlate to views of the associated media item (Pg 9-10: “Y: outcome variable (take rate)...W: a vector covariates (a subset of W) along with treatment effect heterogeneity is evaluated...2. Build a potential outcome model to predict Y give the W covariates. Y=q(X,W)+ε”); accessing an unprocessed image associated with a new media item that has not been processed by the trained ML model (Pg 1: “we can give our creative team data-driven insights to incorporate into their creative strategy, and help in their selection of which artwork to feature”); and implementing the trained ML model to predict an image take fraction for the unprocessed image to indicate how well the unprocessed image will correlate to views of the new, unprocessed media item (Pg 14: “using the causal machine learning framework, we can...test and identify the various components of promotional artwork and gain invaluable creative insights...these insights will guide and assist our team of talented strategists and creatives to select and generate the most attractive artwork, leveraging the attributes that these models selected, down to a specific genre”).

Claim 2: Engin discloses wherein the ML model is configured to identify one or more patterns in the unprocessed image and match those identified patterns to patterns associated with the accessed image (pg 9-10: Engin’s variable W, representing a vector of covariates, is equivalent to the claimed patterns which are utilized in the ML model).
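The potential-outcome step quoted from Engin (build a model q so that Y = q(X, W) + ε, with Y the take rate and W a covariate vector) can be sketched minimally; the linear form of q, the treatment variable X, and all data below are illustrative assumptions, not Engin's actual model or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: X is a binary treatment (e.g. artwork has a
# face), W holds covariates (e.g. genre, brightness), Y is the observed
# take rate. True generating process: Y = 0.10 + 0.05*X + 0.02*W1 + noise.
n = 500
W = rng.normal(size=(n, 2))
X = rng.integers(0, 2, size=n)
Y = 0.10 + 0.05 * X + 0.02 * W[:, 0] + rng.normal(scale=0.01, size=n)

# Fit q(X, W) by ordinary least squares (a stand-in for whatever learner
# Engin actually uses): Y ≈ b0 + b1*X + b2*W1 + b3*W2.
design = np.column_stack([np.ones(n), X, W])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)

# coef[1] estimates the effect of the treatment on the take rate; on this
# synthetic data it should recover a value near the true 0.05.
print(coef[1])
```

This is only the outcome-model step; Engin's full framework also evaluates treatment-effect heterogeneity across the covariates.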
Claim 4: Engin discloses the image take fraction indicates a percentage of views of the associated media item relative to a number of impressions of the accessed image (pg 4-5: “we represent the success of an artwork with the take rate: the probability of an average user to watch the promoted title after seeing its promotional artwork, adjusted for the popularity of the title...we look at user engagement patterns and see whether or not these engagements with artworks resulted in a successful title selection”).

Claim 11: Engin discloses tracking, as feedback, how well the unprocessed image correlated to views of the associated media item; and incorporating the feedback in the ML model when accessing future images and predicting future image take fractions (pg 4-5: “we represent the success of an artwork with the take rate: the probability of an average user to watch the promoted title after seeing its promotional artwork...we look at user engagement patterns and see whether or not these engagements with artwork resulted in a successful title selection...we also utilize machine learning algorithms...for discovering high-level associations between image features and an artwork’s success”).

Claim 12: Engin discloses changing an artwork image for at least one media item based on the incorporated feedback (pg 6: “we use machine learning algorithms to predict whether or not the artwork contains a face...every unit (an artwork) has some chance of getting treated...we calculate the propensity score...of having a face for samples with different covariates. If a certain subset of artwork...has close to a 0 or 1 propensity score for having a face, then we discard these samples from our analysis”).
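Two quantities relied on above, the take fraction reading of Claim 4 and the propensity-score trimming cited for Claim 12, can be illustrated with a minimal sketch; the function names, the 0.05 threshold, and the sample data are hypothetical, not taken from Engin:

```python
def take_fraction(plays: int, impressions: int) -> float:
    """Claim 4's reading: views of the title relative to impressions of the image."""
    return plays / impressions if impressions else 0.0

# Claim 12 cites Engin's propensity-score trimming: artworks whose estimated
# probability of "treatment" (e.g. containing a face) is near 0 or 1 have no
# comparable counterfactual, so they are dropped before the causal analysis.
def trim_by_propensity(samples, eps=0.05):
    return [s for s in samples if eps < s["propensity"] < 1 - eps]

artworks = [
    {"id": "a", "propensity": 0.50},
    {"id": "b", "propensity": 0.99},  # almost always treated -> discarded
    {"id": "c", "propensity": 0.02},  # almost never treated  -> discarded
]
print(take_fraction(35, 1000))                          # 0.035
print([s["id"] for s in trim_by_propensity(artworks)])  # ['a']
```

Note that Engin's take rate is additionally adjusted for title popularity; the raw plays/impressions ratio here is only the starting point.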
Claim 13: Engin discloses A system comprising: at least one physical processor; and physical memory comprising computer-executable instructions (title: Engin discusses a process that is machine learning, which inherently involves a computer system) that, when executed by the physical processor, cause the physical processor to: access at least one image associated with a media item (Pg 1: “at Netflix...promotional artwork...represents each title featured on our platform”); identify an association between the accessed image and an image take fraction that indicates how well the accessed image correlates to views of the associated media item (Pg 3, 4 and 6: “we have rich dataset of promotional artwork components and user engagement data...we represent the success of an artwork with the take rate: the probability of an average user to watch the promoted title after seeing its promotional artwork, adjusted for the popularity of the title...here are two promotional artwork assets from Unbreakable Kimmy Schmidt. We know that the image on the left performed better than the image on the right”); based at least on the identified association between the accessed media item image and the corresponding image take fraction, train a machine learning (ML) model to predict which images will optimally correlate to views of the associated media item (Pg 9-10: “Y: outcome variable (take rate)...W: a vector covariates (a subset of W) along with treatment effect heterogeneity is evaluated...2. Build a potential outcome model to predict Y give the W covariates.
Y=q(X,W)+ε”); access an unprocessed image associated with a new media item that has not been processed by the trained ML model (Pg 1: “we can give our creative team data-driven insights to incorporate into their creative strategy, and help in their selection of which artwork to feature”); and implement the trained ML model to predict an image take fraction for the unprocessed image to indicate how well the unprocessed image will correlate to views of the new, unprocessed media item (Pg 14: “using the causal machine learning framework, we can...test and identify the various components of promotional artwork and gain invaluable creative insights...these insights will guide and assist our team of talented strategists and creatives to select and generate the most attractive artwork, leveraging the attributes that these models selected, down to a specific genre”).

Claim 20: Engin discloses A non-transitory computer-readable medium comprising one or more computer-executable instructions (Title) that, when executed by at least one processor of a computing device, cause the computing device to: access at least one image associated with a media item (Pg 1: “at Netflix...promotional artwork...represents each title featured on our platform”); identify an association between the accessed image and an image take fraction that indicates how well the accessed image correlates to views of the associated media item (Pg 3, 4 and 6: “we have rich dataset of promotional artwork components and user engagement data...we represent the success of an artwork with the take rate: the probability of an average user to watch the promoted title after seeing its promotional artwork, adjusted for the popularity of the title...here are two promotional artwork assets from Unbreakable Kimmy Schmidt.
We know that the image on the left performed better than the image on the right”); based at least on the identified association between the accessed media item image and the corresponding image take fraction, train a machine learning (ML) model to predict which images will optimally correlate to views of the associated media item (Pg 9-10: “Y: outcome variable (take rate)...W: a vector covariates (a subset of W) along with treatment effect heterogeneity is evaluated...2. Build a potential outcome model to predict Y give the W covariates. Y=q(X,W)+ε”); access an unprocessed image associated with a new media item that has not been processed by the trained ML model (Pg 1: “we can give our creative team data-driven insights to incorporate into their creative strategy, and help in their selection of which artwork to feature”); and implement the trained ML model to predict an image take fraction for the unprocessed image to indicate how well the unprocessed image will correlate to views of the new, unprocessed media item (Pg 14: “using the causal machine learning framework, we can...test and identify the various components of promotional artwork and gain invaluable creative insights...these insights will guide and assist our team of talented strategists and creatives to select and generate the most attractive artwork, leveraging the attributes that these models selected, down to a specific genre”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 3, 5, 6, 9, 10, 14-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Engin in view of Madeline et al. ("AVA: The Art and Science of Image Discovery at Netflix," Netflix Technology Blog, February 2018, 12 pages, from IDS dated 6/23/2025; hereinafter Madeline).

Claim 3: Engin discloses every feature of claim 1. Engin, by itself, does not seem to completely teach filtering images that are to be processed by the ML model to ensure that the images are usable by the ML model. The Examiner maintains that these features were previously well-known as taught by Madeline. Madeline teaches filtering images that are to be processed by the ML model to ensure that the images are usable by the ML model (Pg 11: Filters for Maturity). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to obtain the benefit of ensuring only appropriate images are used to represent media content.
Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 5: Engin discloses every feature of claim 1. Engin, by itself, does not seem to completely teach the ML model comprises a deep learning model that is configured to analyze a plurality of images and a corresponding plurality of image take fractions to indicate how well the plurality of images correlates to views of the associated media items. The Examiner maintains that these features were previously well-known as taught by Madeline. Madeline teaches the ML model comprises a deep learning model that is configured to analyze a plurality of images and a corresponding plurality of image take fractions to indicate how well the plurality of images correlates to views of the associated media items (pages 8-9: “we outline some of the key elements we use to surface the best images for a given title...one way we identify the key character for a given episode is by utilizing a combination of face clustering and actor recognition to prioritize main characters and deprioritize secondary characters or extras...we trained a deep-learning model to trace facial similarities from all qualifying candidate frames tagged with frame annotation to surface and rank the main actors of a given title without knowing anything about the cast members”). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).
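Claims 5 and 6 concern analyzing a plurality of images and ranking them by predicted take fraction; a minimal sketch follows, with `predict_take_fraction` as a hypothetical stand-in for the trained model and toy features invented for illustration:

```python
# Rank candidate images by a model's predicted take fraction (claim 6).
# `predict_take_fraction` is a hypothetical stand-in for the trained ML
# model; its heuristic exists only to make the example runnable.
def predict_take_fraction(image_features: dict) -> float:
    return 0.1 + 0.2 * image_features.get("has_main_character", 0)

candidates = [
    {"name": "wide_shot.jpg", "has_main_character": 0},
    {"name": "close_up.jpg", "has_main_character": 1},
]

# Highest predicted take fraction first.
ranked = sorted(candidates, key=predict_take_fraction, reverse=True)
print([c["name"] for c in ranked])  # ['close_up.jpg', 'wide_shot.jpg']
```

In the claimed method the ranked list would then feed downstream steps such as the thematic grouping of claims 14-19.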
Claim 6: Engin, by itself, does not seem to completely teach ranking each of the plurality of images based on the predicted image take fractions. The Examiner maintains that these features were previously well-known as taught by Madeline. Madeline teaches ranking each of the plurality of images based on the predicted image take fractions (Page 8: Image Ranking). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 9: Engin, by itself, does not seem to completely teach recropped versions of the accessed image result in different image take fractions for the associated media item. The Examiner maintains that these features were previously well-known as taught by Madeline. Madeline teaches recropped versions of the accessed image result in different image take fractions for the associated media item (Pg 8: Composition Metadata). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content.
Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 10: Engin, by itself, does not seem to completely teach the ML model is configured to process the recropped versions of the accessed image as separate images that are each associated with the media item. The Examiner maintains that these features were previously well-known as taught by Madeline. Madeline teaches the ML model is configured to process the recropped versions of the accessed image as separate images that are each associated with the media item (pg 3-5: we first came up with objective signals that we can measure for each and every frame of the video using Frame Annotations. As a result, we can collect an effective representation of each frame of the video...every frame of video in a piece of content is processed through a series of computer vision algorithms to gather object frame metadata...as well as some of the contextual metadata that those frame(s) contain). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 14: Engin, by itself, does not seem to completely teach the unprocessed image and other images processed by the ML model are ranked based on the corresponding predicted image take fractions, and wherein a supervised model is implemented to group the ranked images into thematic containers.
The Examiner maintains that these features were previously well-known as taught by Madeline. Madeline teaches the unprocessed image and other images processed by the ML model are ranked based on the corresponding predicted image take fractions, and wherein a supervised model is implemented to group the ranked images into thematic containers (pg 8-9: the next step is to surface “the best” image candidates from those frames through an automated artwork pipeline...they are automatically provided with a high quality image set to choose from...one way we identify the key character for a given episode is by utilizing a combination of face clustering and actor recognition to prioritize main characters...we trained a deep-learning model to trace facial similarities from all qualifying candidate frames...to surface and rank the main actors of a given title”; an image shows an example of actor clusters, frame ranking and optimal selection allowing for a user to choose an image). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 15: Engin, by itself, does not seem to completely teach each thematic bucket is assigned a specific number of images that are to be taken from the associated media item and placed in each thematic container. The Examiner maintains that these features were previously well-known as taught by Madeline.
Madeline teaches each thematic bucket is assigned a specific number of images that are to be taken from the associated media item and placed in each thematic container (pg 8-9: the next step is to surface “the best” image candidates from those frames through an automated artwork pipeline...they are automatically provided with a high quality image set to choose from...one way we identify the key character for a given episode is by utilizing a combination of face clustering and actor recognition to prioritize main characters...we trained a deep-learning model to trace facial similarities from all qualifying candidate frames...to surface and rank the main actors of a given title”; an image shows an example of actor clusters, frame ranking and optimal selection allowing for a user to choose an image). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 16: Engin, by itself, does not seem to completely teach the thematic containers include containers for at least one of: images with specific characters, images conveying specific genres, images conveying specific storylines, images conveying specific tones, or images conveying a specific type of shot. The Examiner maintains that these features were previously well-known as taught by Madeline.
Madeline teaches the thematic containers include containers for at least one of: images with specific characters, images conveying specific genres, images conveying specific storylines, images conveying specific tones, or images conveying a specific type of shot (pg 8-9: the next step is to surface “the best” image candidates from those frames through an automated artwork pipeline...they are automatically provided with a high quality image set to choose from...one way we identify the key character for a given episode is by utilizing a combination of face clustering and actor recognition to prioritize main characters...we trained a deep-learning model to trace facial similarities from all qualifying candidate frames...to surface and rank the main actors of a given title”; an image shows an example of actor clusters, frame ranking and optimal selection allowing for a user to choose an image). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 17: Engin, by itself, does not seem to completely teach at least one of the images belongs to a plurality of different thematic containers. The Examiner maintains that these features were previously well-known as taught by Madeline.
Madeline teaches at least one of the images belongs to a plurality of different thematic containers (pg 8-9: the next step is to surface “the best” image candidates from those frames through an automated artwork pipeline...they are automatically provided with a high quality image set to choose from...one way we identify the key character for a given episode is by utilizing a combination of face clustering and actor recognition to prioritize main characters...we trained a deep-learning model to trace facial similarities from all qualifying candidate frames...to surface and rank the main actors of a given title”; an image shows an example of actor clusters, frame ranking and optimal selection allowing for a user to choose an image). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 18: Engin, by itself, does not seem to completely teach the images in each thematic container are ranked based on the image's corresponding image take fraction. The Examiner maintains that these features were previously well-known as taught by Madeline.
Madeline teaches the images in each thematic container are ranked based on the image's corresponding image take fraction (pg 8-9: the next step is to surface “the best” image candidates from those frames through an automated artwork pipeline...they are automatically provided with a high quality image set to choose from...one way we identify the key character for a given episode is by utilizing a combination of face clustering and actor recognition to prioritize main characters...we trained a deep-learning model to trace facial similarities from all qualifying candidate frames...to surface and rank the main actors of a given title”; an image shows an example of actor clusters, frame ranking and optimal selection allowing for a user to choose an image). Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim 19: Engin, by itself, does not seem to completely teach the images in the thematic containers to at least one user for selection and use with the associated media item. The Examiner maintains that these features were previously well-known as taught by Madeline. Madeline teaches the images in the thematic containers to at least one user for selection and use with the associated media item (pg 8-9: an image shows an example of actor clusters, frame ranking and optimal selection allowing for a user to choose an image).
Engin and Madeline are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Madeline before him or her, to combine the teachings of Engin and Madeline. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Madeline to obtain the invention as specified in the instant claim(s).

Claim(s) 7 and 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Engin in view of Krishnan et al. ("Selecting the Best Artwork for Videos Through A/B Testing," Netflix Technology Blog, February 2018, 20 pages; hereinafter Krishnan).

Claim 7: Engin discloses every feature of claim 1. Engin, by itself, does not seem to completely teach the image take fraction includes, as a factor, an amount of time spent watching the media item. The Examiner maintains that these features were previously well-known as taught by Krishnan. Krishnan teaches the image take fraction includes, as a factor, an amount of time spent watching the media item (pg 7: amount of time spent watching being included in the take rate is discussed). Engin and Krishnan are analogous art because they are from the same problem-solving area, identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Krishnan before him or her, to combine the teachings of Engin and Krishnan. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Krishnan to obtain the invention as specified in the instant claim(s).
Claim 8: Engin, by itself, does not seem to completely teach that the image take fraction includes, as a factor, a property associated with the media item. The Examiner maintains that these features were previously well known, as taught by Krishnan. Krishnan teaches the image take fraction including, as a factor, a property associated with the media item (pg 7: the take rate potentially including a variety of associated data is discussed).

Engin and Krishnan are analogous art because they are from the same problem-solving area: identifying images to represent media content. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Engin and Krishnan before him or her, to combine them. The rationale for doing so would have been to ensure only appropriate images are used to represent media content. Therefore, it would have been obvious to combine Engin and Krishnan to obtain the invention as specified in the instant claim.

Note

The Examiner cites particular columns, line numbers and/or paragraph numbers in the references as applied to the claims above for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form.
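For readers unfamiliar with the "image take fraction" metric at issue in claims 7 and 8, the following sketch illustrates one way such a metric could fold in watch time as a factor. The schema, field names, and threshold are hypothetical illustrations only; they are not drawn from the claims or from the cited Engin or Krishnan references.

```python
from dataclasses import dataclass

@dataclass
class Impression:
    """One display of a candidate image for a media item (hypothetical schema)."""
    played: bool          # did the viewer start the title from this image?
    watch_seconds: float  # time spent watching after the play
    runtime_seconds: float

def take_fraction(impressions: list[Impression], min_watch_ratio: float = 0.0) -> float:
    """Fraction of impressions that led to a qualifying play.

    With min_watch_ratio = 0 this is a plain plays/impressions ratio; a
    positive ratio folds the amount of time spent watching in as a factor
    (the kind of factor recited in claim 7).
    """
    if not impressions:
        return 0.0
    takes = sum(
        1 for imp in impressions
        if imp.played and imp.watch_seconds >= min_watch_ratio * imp.runtime_seconds
    )
    return takes / len(impressions)

# Example: 4 impressions, 2 plays, only 1 of which was watched past 50% of runtime.
imps = [
    Impression(True, 3000, 3600),
    Impression(True, 300, 3600),
    Impression(False, 0, 3600),
    Impression(False, 0, 3600),
]
print(take_fraction(imps))       # 0.5  (plain take fraction)
print(take_fraction(imps, 0.5))  # 0.25 (watch-time-qualified take fraction)
```

Under this reading, claim 8's "property associated with the media item" could enter the same computation as an additional qualifying condition on each impression.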
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI, whose telephone number is (571) 270-7761. The examiner can normally be reached M-Th 8-6, Fri 7-12/OFF.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steph Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED H ZUBERI/
Primary Examiner, Art Unit 2178

Prosecution Timeline

Nov 10, 2023
Application Filed
Dec 13, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585923
DESPARSIFIED CONVOLUTION FOR SPARSE ACTIVATIONS
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12582478
SYSTEMS AND METHODS FOR INTEGRATING INTRAOPERATIVE IMAGE DATA WITH MINIMALLY INVASIVE MEDICAL TECHNIQUES
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12579650
IMPROVED SPINAL HARDWARE RENDERING
Granted Mar 17, 2026 · 2y 5m to grant
Patent 12567496
METHOD AND APPARATUS FOR DISPLAYING AND ANALYSING MEDICAL SCAN IMAGES
Granted Mar 03, 2026 · 2y 5m to grant
Patent 12547819
MODULAR SYSTEMS AND METHODS FOR SELECTIVELY ENABLING CLOUD-BASED ASSISTIVE TECHNOLOGIES
Granted Feb 10, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
98%
With Interview (+27.8%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 437 resolved cases by this examiner. Grant probability derived from career allow rate.
