Prosecution Insights
Last updated: April 19, 2026
Application No. 18/361,931

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD

Final Rejection — §101, §103
Filed
Jul 31, 2023
Examiner
O'MALLEY, CONOR AIDAN
Art Unit
2675
Tech Center
2600 — Communications
Assignee
Sony Interactive Entertainment Inc.
OA Round
2 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 72%

Examiner Intelligence

Grants 67% of cases, above average.

Career Allow Rate: 67% (16 granted / 24 resolved; +4.7% vs TC avg)
Interview Lift: +5.7% among resolved cases with interview (moderate, roughly +6%)
Typical Timeline: 3y 0m avg prosecution; 26 currently pending
Career History: 50 total applications across all art units
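The headline figures above are simple ratios. A minimal Python sketch of how they could be derived (illustrative only; the product's actual methodology is not disclosed, and the variable names are assumptions):

```python
# Illustrative reconstruction of the dashboard's headline statistics.
# The real scoring model is not disclosed; this only shows the arithmetic.

granted = 16
resolved = 24

allow_rate = granted / resolved        # career allow rate as a fraction
interview_lift = 0.057                 # reported lift for interviewed cases

grant_probability = round(allow_rate * 100)                   # 67
with_interview = round((allow_rate + interview_lift) * 100)   # 72

print(f"Career allow rate: {grant_probability}%")
print(f"With interview:    {with_interview}%")
```

This matches the page: 16/24 rounds to 67%, and adding the 5.7-point interview lift rounds to 72%.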

Statute-Specific Performance

§101: 24.2% (-15.8% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 24 resolved cases
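Each statute-level allowance figure is paired with a delta against the Tech Center average, so the implied TC baseline can be recovered by subtraction. A small sketch, assuming the deltas are plain percentage-point differences:

```python
# Recover the implied Tech Center average from each (rate, delta-vs-TC) pair.
# Assumes the deltas are percentage-point differences, as the layout suggests.

stats = {
    "101": (24.2, -15.8),
    "103": (35.9, -4.1),
    "102": (16.7, -23.3),
    "112": (22.0, -18.0),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg}%")
```

Every pair implies the same 40.0% Tech Center baseline, which is a useful consistency check on the displayed deltas.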

Office Action — §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-11 and 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a mental process. This judicial exception is not integrated into a practical application because the generically recited computer elements amount to nothing more than simply implementing an abstract idea on a computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the various computer elements recited are no more specific than a generic recitation of a processor or memory. The camera on a head-mounted display is a camera, and the use of a camera to take pictures is well known. A head-mounted display is claimed quite broadly and generically as well, and it would constitute mere instructions to apply an exception, as it is used as a mere tool and is recited at a high degree of generality.
In regards to claim 1, an information processing device comprising: one or more memory devices configured to store instructions; and one or more processors, that upon execution of the instructions, are configured to: acquire data of a plurality of frames of a currently captured moving image, wherein a first subset of the plurality of frames is captured from a first viewpoint different from a second viewpoint from which a second subset of the plurality of frames is captured (A person of ordinary skill in the art can acquire frame data or frames of a captured moving image from multiple perspectives, and this is mere data gathering, which is extra-solution activity); cut out a respective image of a specific region from each frame of the plurality of frames arranged in chronological order (A person of ordinary skill in the art can cut an image out of a specific part of an image using scissors or even their hands from frames arranged in chronological order, and this is an abstract idea, more specifically one of certain methods of organizing human activity), wherein cutting out the respective image of the specific region from each frame of the plurality of frames comprises adjusting a cut-out target region based on predetermined rules with respect to a time axis (A person of ordinary skill in the art can adjust a cut-out target region literally by picking it up and moving it in accordance with some rule regarding time, and this is an abstract idea, more specifically one of certain methods of organizing human activity); and analyze the respective image of the specific region to acquire predetermined information, wherein the analyzing comprises converting at least two images of the specific region to central projection images having parallel optical axes (A person's eyes operate in this manner, where both eyes are on parallel optical axes and produce two separate images into one central projection; as this occurs naturally for any person with two eyes seeing an image, it is directed to an abstract idea and to a process of nature).

In regards to claim 2, wherein the instructions are further executable to cause the one or more processors to reciprocate the cut-out target region in a predetermined direction in a frame plane (A person of ordinary skill in the art can move a cut-out piece of a picture back and forth in a predetermined direction on a plane, which is a mental process and/or organizing human activity, as it uses a physical aid, in this case the hand of a person of ordinary skill in the art, to move a cut-out part of an image). Further, wherein the crop section reciprocates the cut-out target region in a predetermined direction in a frame plane (This is an insignificant application, as merely moving an object back and forth along an axis does not do anything meaningful or significant).

In regards to claim 3, wherein the instructions are further executable to cause the one or more processors to store parameters for image correction of each region of a plurality of regions set as the cut-out target region (A person of ordinary skill in the art can store the parameters to correct an image of a plurality of regions, and this is a mental process, as a person can remember some form of parameters); change the cut-out target region between the plurality of regions; and correct a cut-out image according to the parameters (A person can choose and change the region in the plurality, and correcting as they go is a mental process, as choosing is a form of judgment, which is a mental process).
In regards to claim 4, wherein the instructions are further executable to cause the one or more processors to move the cut-out target region at a constant speed in a predetermined direction in a frame plane (A person of ordinary skill in the art can move a cut-out part of an image at a constant speed, and it is a mental process, as choosing a constant speed is a form of judgment, and moving an image is a mental process aided by a simple tool such as the person's hand). Further, wherein the instructions are further executable to cause the one or more processors to move the cut-out target region at a constant speed in a predetermined direction in a frame plane (This is insignificant extra-solution activity, as merely moving a part of an image at one speed, or even not moving the cut-out, is not significantly more).

In regards to claim 5, wherein the instructions are further executable to cause the one or more processors to: change the cut-out target region at intervals of one frame or a predetermined number of frames and cut out the respective image after each change (A person of ordinary skill in the art can make the cut-outs on some kind of time interval, and the choice of interval is a form of judgment, which is a mental process).

In regards to claim 6, wherein the instructions are further executable to cause the one or more processors to change a movement speed of the cut-out target region in a frame plane according to a position of the cut-out target region in the frame plane (A person of ordinary skill in the art can change the speed of a region depending upon where it is on the frame plane, and choosing when to change the speed is a form of mental process, as it is a judgment).
Further, wherein the crop section changes a movement speed of the cut-out target region in a frame plane according to a position of the cut-out target region in the frame plane (This is insignificant extra-solution activity, as it is merely changing the speed of movement in relation to a position, which is an insignificant application).

In regards to claim 7, wherein the instructions are further executable to cause the one or more processors to: identify a region where a predetermined object is predicted to be depicted in the frames based on a threshold value; and decrease the movement speed of the cut-out target region in the identified region (A person of ordinary skill in the art can determine where an object is predicted to be, and further decrease the movement speed of the cut-out in that region; determining when to decrease the speed is a form of judgment, and it is a mental process). Further, wherein the instructions are further executable to cause the one or more processors to: identify a region where a predetermined object is highly likely to be depicted in the frames based on a threshold value; and decrease the movement speed of the cut-out target region in the identified region (This is insignificant extra-solution activity, as it is merely changing the speed of movement in relation to a position, which is an insignificant application).
In regards to claim 8, wherein the instructions are further executable to cause the one or more processors to: acquire data of the frames captured by a respective camera mounted on a head-mounted display (A person of ordinary skill in the art could ensure that the data was captured by a camera on a head-mounted display, which is a mental process that merely uses a computer as a tool; further, the usage of a camera to capture images is mere data gathering, which is insignificant extra-solution activity); and on a basis of a posture of the head-mounted display, identify a region where the predetermined object is predicted to be depicted based at least in part on a comparison between the posture of the head-mounted display and the threshold value (A person of ordinary skill in the art can identify a region where an object of some kind is predicted to be depicted based upon some kind of threshold, which is a form of analysis or data gathering, which is insignificant extra-solution activity, along with being a form of judgment, which is a mental process).

In regards to claim 9, wherein the instructions are further executable to cause the one or more processors to: acquire data of the frames captured by the respective camera mounted on a head-mounted display (A person of ordinary skill in the art could ensure that the data was captured by a camera on a head-mounted display, which is a mental process that merely uses a computer as a tool; further, the usage of a camera to capture images is mere data gathering, which is insignificant extra-solution activity); and acquire information regarding a configuration of objects existing around a user wearing the head-mounted display (A person of ordinary skill in the art could also acquire information on the objects that surround a user wearing a head-mounted display, which is mere data gathering, a form of insignificant extra-solution activity).
Claims 10 and 11 are similar to claim 1, and they are similarly rejected as being directed to an abstract idea.

In regards to claim 15, wherein the predetermined information acquired by analyzing the respective image of the specific region comprises state information related to a location or a posture of a head-mounted display comprising at least one camera configured to capture a respective subset of the plurality of frames (A person of ordinary skill in the art can determine the location of a head-mounted display, or the posture of a person wearing such a display, by looking at a specific region of an image).

In regards to claims 16 and 17, they are similar to claim 15, and they are rejected similarly.

In regards to claim 18, wherein the instructions are further executable to cause the one or more processors to: determine a drawing position of a virtual object based on the predetermined information acquired by analyzing the respective image of the specific region (A person can look at an image and determine that they wish to draw an image in a specific position, based on some form of predetermined information, by analyzing or looking at a region of an image); and display the virtual object on a view screen based on the drawing position by superimposing the virtual object on a particular frame of the plurality of frames (A person can display their drawn object on a screen or with the image itself and superimpose the drawing onto the image by drawing it on the image with a pen).

In regards to claims 19 and 20, they are similar to claim 18, and they are similarly rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-5, and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Takayuki et al. (US2020090403A1), hereinafter referred to as Takayuki, in view of Ohba et al. (WO 2020170455 A1), hereinafter referred to as Ohba.

In regards to claim 1, Takayuki discloses an information processing device comprising: one or more memory devices configured to store instructions; and one or more processors, that upon execution of the instructions, are configured to: acquire data of a plurality of frames of a currently captured moving image, wherein a first subset of the plurality of frames is captured from a first viewpoint different from a second viewpoint from which a second subset of the plurality of frames is captured (Paragraphs 23, 5, and 33: paragraph 23 discloses that the device can include two cameras, which would be two separate viewpoints; paragraph 5 further discloses the usage of cameras; and paragraph 33 discloses two processors and a memory unit); cut out a respective image of a specific region from each frame of the plurality of frames arranged in chronological order, wherein cutting out the respective image of the specific region from each frame of the plurality of frames comprises adjusting a cut-out target region based on predetermined rules with respect to a time axis (Paragraphs 54, 9, 66, and 69: the division of an image would read on the cropping of an image; the timeliness of paragraph 9 implies that the process runs in real time, which would be within a BRI of chronological order; and paragraphs 66 and 69 disclose image analysis, specifically of a pre-specified region, with regards to predetermined rules).

However, Takayuki does not explicitly disclose and analyze the respective image of the specific region to acquire predetermined information, wherein the analyzing comprises converting at least two images of the specific region to central projection images having parallel optical axes. However, Ohba does disclose and analyze the respective image of the specific region to acquire predetermined information, wherein the analyzing comprises converting at least two images of the specific region to central projection images having parallel optical axes (Paragraphs 3-6 on page 4, the last paragraph of page 4, and the first paragraph of page 5: paragraphs 3-6 detail the parallax between the viewpoints of two eyes that produce images that have parallel optical axes, and these are used to form a stereoscopic image, or the central projection, as described in the last paragraph of page 4 and the first paragraph of page 5).

It would have been prima facie obvious to combine the teachings of these two arts, as it would lead to a predictable increase in the accuracy of the depiction. As the cameras are located one for each eye respectively, they are only able to catch so much of the image and would distort the user's vision. As such, combining the two images into one central projection image would make the system feel more natural for a user, as it is more accurate to their own eyesight, and the central projection image is a more accurate depiction of the space. As such, it would be prima facie obvious.
In regards to claim 3, Takayuki discloses wherein the instructions are further executable to cause the one or more processors to store parameters for image correction of each region of a plurality of regions set as the cut-out target region (Paragraph 80: this paragraph discloses a memory buffer that holds the images, and the output unit would, by implication, have the parameters for correcting an image stored within it); change the cut-out target region between the plurality of regions; and correct a cut-out image according to the parameters (Paragraph 80: the output unit corrects the distortion of an image using certain parameters).

In regards to claim 4, Takayuki discloses wherein the instructions are further executable to cause the one or more processors to move the cut-out target region at a constant speed in a predetermined direction in a frame plane (Paragraph 54: merely creating a region that does not move would read on the claim as it is currently written; the BRI of a constant speed in a predetermined direction would include a speed of zero for a direction, so long as the speed never changes).

In regards to claim 5, Takayuki discloses wherein the instructions are further executable to cause the one or more processors to: change the cut-out target region at intervals of one frame or a predetermined number of frames and cut out the respective image after each change (Paragraphs 66 and 23: paragraph 66 discloses determining a new region per frame, and paragraph 23 discloses generating a new image every few frames).
In regards to claim 9, Takayuki discloses wherein the instructions are further executable to cause the one or more processors to: acquire data of the frames captured by the respective camera mounted on a head-mounted display (Paragraph 36: this covers that the camera is on the head-mounted display); and acquire information regarding a configuration of objects existing around a user wearing the head-mounted display (Paragraphs 23-25: these describe how the camera is used to correspond with the space around the user).

In regards to claims 10 and 11, they are similar to claim 1, and they are similarly rejected.

In regards to claim 12, Ohba discloses wherein the first viewpoint is a left viewpoint and the second viewpoint is a right viewpoint, and wherein converting the at least two images of the specific region to the central projection images having the parallel optical axes further comprises (Paragraphs 3-6 on page 4, the last paragraph of page 4, and the first paragraph of page 5: paragraphs 3-6 detail the parallax between the viewpoints of two eyes that produce images that have parallel optical axes, and these are used to form a stereoscopic image, or the central projection, as described in the last paragraph of page 4 and the first paragraph of page 5): applying a respective transformation matrix to each image of the at least two images (Fourth new paragraph of page 13: the disclosed conversion matrix is applied to the projection of the head-mounted display, which implies that this correction matrix is applied to both images before they are combined, which is within the BRI of a transformation matrix).

In regards to claims 13 and 14, they are similar to claim 12, and they are similarly rejected.
In regards to claim 15, Takayuki discloses wherein the predetermined information acquired by analyzing the respective image of the specific region comprises state information related to a location or a posture of a head-mounted display comprising at least one camera configured to capture a respective subset of the plurality of frames (Paragraphs 25-26: these paragraphs disclose that images can be used to determine the camera's position and the posture and movement of the user's head).

In regards to claims 16 and 17, they are similar to claim 15, and they are similarly rejected.

In regards to claim 18, Ohba discloses wherein the instructions are further executable to cause the one or more processors to: determine a drawing position of a virtual object based on the predetermined information acquired by analyzing the respective image of the specific region (New paragraphs 2-6 on page 9: these disclose that the drawing in the virtual space can have its position determined via a wide grouping of predetermined factors, such as pixel rows); and display the virtual object on a view screen based on the drawing position by superimposing the virtual object on a particular frame of the plurality of frames (Third new paragraph of page 3: the paragraph discloses that the video can include a virtual object drawn onto the image).

In regards to claims 19 and 20, they are similar to claim 18, and they are similarly rejected.

Claims 2 and 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Takayuki et al. (US2020090403A1), hereinafter referred to as Takayuki, in view of Ohba et al. (WO 2020170455 A1), hereinafter referred to as Ohba, as applied to claims 1, 3-5, and 9-20 above, and further in view of McCall (JP2022502749A).

In regards to claim 2, Takayuki does not disclose wherein the instructions are further executable to cause the one or more processors to reciprocate the cut-out target region in a predetermined direction in a frame plane.
McCall is a relevant piece of prior art, as it similarly deals with using wearable headgear to analyze a 3D model. However, McCall does disclose wherein the instructions are further executable to cause the one or more processors to reciprocate the cut-out target region in a predetermined direction in a frame plane (Paragraph 76: McCall discloses reciprocating an image along the z-axis). It would have been prima facie obvious to combine the teachings of the two arts, as it would have been simple substitution to put in an image cut-out instead of McCall's image and have it reciprocate along an axis. One could simply use part of an image to reciprocate rather than a whole image, and the results would have been predictable. As such, it would be prima facie obvious.

In regards to claim 6, Takayuki does not disclose the elements of this claim. McCall does disclose wherein the instructions are further executable to cause the one or more processors to change a movement speed of the cut-out target region in a frame plane according to a position of the cut-out target region in the frame plane (Paragraph 76: McCall discloses reciprocating an image along the z-axis; the reciprocation in question implies an acceleration and deceleration that would be within a BRI of changing the speed of the cut-out region, and it would have to do so at particular locations, as it is constantly moving back and forth).

In regards to claim 7, Takayuki does disclose a region where a predetermined object is predicted to be depicted in the frames (Paragraphs 62 and 65: these show that it covers the objects/areas that are most likely to draw a user's attention and crops them). Takayuki does not explicitly disclose based on a threshold value; and decrease the movement speed of the cut-out target region in the identified region.
However, McCall does disclose based on a threshold value (Paragraphs 123-124: these disclose that object recognition can utilize thresholding and thresholds); and decrease the movement speed of the cut-out target region in the identified region (Paragraph 76: as the reciprocation of the image would entail acceleration and deceleration, it would have to decrease the speed during deceleration). It would have been prima facie obvious to combine the teachings of the two arts, as it would have been obvious to try. Takayuki discloses likely locations, and McCall discloses deceleration and acceleration. As speed is a scalar, it only accounts for magnitude and not direction, so it can only do three things: maintain the current speed, slow down, or speed up. One could easily choose one of those to do over a region of interest, which, in this case, is to slow down. As such, it would be prima facie obvious to try.

In regards to claim 8, Takayuki discloses wherein the instructions are further executable to cause the one or more processors to: acquire data of the frames captured by a respective camera mounted on a head-mounted display (Paragraph 36: this covers that the camera is on the head-mounted display); and on a basis of a posture of the head-mounted display, identify a region where the predetermined object is predicted to be depicted based at least in part on a comparison between the posture of the head-mounted display and the threshold value (Paragraphs 47, 62, and 65: these show that it covers the objects/areas that are most likely to draw a user's attention and crops them, and paragraph 47 discloses that the position and posture of a user's head factor into the drawing process). McCall discloses the use of thresholds (Paragraphs 117 and 123-124: these disclose that object recognition can utilize thresholding and thresholds, with paragraph 117 disclosing that the threshold can be used to determine where a person is predicted to look).
Response to Amendment

The amendment, entered 11/11/2025, is entered into the record and has been fully considered. The amendments have overcome all objections, 112(f) interpretations, and the prior 102 rejections. The previous 102 rejections have been replaced with 103 rejections made necessary by the amendments to the claims.

Response to Arguments

Applicant's arguments, focused on 35 U.S.C. 102 and 35 U.S.C. 103, with respect to claims 1-11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant's arguments in regards to 35 U.S.C. 101, filed 11/11/2025, have been fully considered, but they are not persuasive. Applicant alleges that the frames stated are computer data structures. A frame of a video or an image in a series of images is not inherently a computer data structure. The BRI of a frame of a video or an image does not limit itself exclusively to computer data structures, as this can include physical pictures or physical slideshows. Examiner agrees with the concept that a human being cannot modify a computer data structure purely mentally, which is why claims 12-14 are not rejected under 35 U.S.C. 101, as they recite a transformation matrix, which would be a computer data structure. Further, the newly added language to the independent claims would fall under 35 U.S.C. 101, as it recites a feature that is within the BRI of human eyesight.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONOR AIDAN O'MALLEY, whose telephone number is (571) 272-0226. The examiner can normally be reached Monday - Friday, 9:00 a.m. - 5:00 p.m. EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at 572-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

CONOR AIDAN O'MALLEY
Examiner, Art Unit 2675

/CONOR A O'MALLEY/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675
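Several of the dependent claims (2 and 4 through 6) concern moving a cut-out target region across frames under time-based rules: reciprocation, constant speed, and per-frame updates. As a purely illustrative sketch of that general technique (not the applicant's or examiner's implementation; all names and parameter values here are assumptions), such a crop scheduler could look like:

```python
# Illustrative crop-window scheduler: reciprocates a fixed-size cut-out
# region horizontally across a frame, one step per frame (cf. claims 2, 4-6).
# A sketch of the general technique, not the claimed implementation.

def crop_origin(frame_index: int, frame_width: int, crop_width: int, speed: int) -> int:
    """Left edge of the cut-out region for a given frame, moving at a
    constant speed and bouncing back at the frame edges (triangle wave)."""
    travel = frame_width - crop_width           # usable horizontal range
    pos = (frame_index * speed) % (2 * travel)  # position on the out-and-back path
    return pos if pos <= travel else 2 * travel - pos

def cut_out(frame, frame_index, crop_width=64, speed=8):
    """Extract the scheduled region from one frame (frame: 2-D array-like)."""
    x = crop_origin(frame_index, len(frame[0]), crop_width, speed)
    return [row[x:x + crop_width] for row in frame]
```

With `frame_width=256` and `crop_width=64`, the window sweeps right for 24 frames and then back, changing position every frame: the "predetermined rules with respect to a time axis" pattern that the rejection characterizes in the abstract.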

Prosecution Timeline

Jul 31, 2023
Application Filed
Aug 07, 2025
Non-Final Rejection — §101, §103
Oct 15, 2025
Applicant Interview (Telephonic)
Oct 15, 2025
Examiner Interview Summary
Nov 11, 2025
Response Filed
Jan 16, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573234
BLINK DETECTION IN CABIN USING DYNAMIC VISION SENSOR
2y 5m to grant • Granted Mar 10, 2026
Patent 12555254
MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS METHOD, AND NON-TRANSITORY, COMPUTER-READABLE MEDIUM
2y 5m to grant • Granted Feb 17, 2026
Patent 12541866
MEDICAL IMAGE PROCESSING APPARATUS, METHOD, AND COMPUTER READABLE MEDIUM THAT ANALYZE A FLUORESCENCE IMAGE FROM PHOSPHOR IN BIOLOGICAL TISSUE
2y 5m to grant • Granted Feb 03, 2026
Patent 12536776
TEACHING METHOD AND TRANSFER SYSTEM FOR SUBSTRATE USING THREE-DIMENSIONAL IMAGE DATA
2y 5m to grant • Granted Jan 27, 2026
Patent 12488417
PARAMETRIC COMPOSITE IMAGE HARMONIZATION
2y 5m to grant • Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
72%
With Interview (+5.7%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
