Prosecution Insights
Last updated: April 19, 2026
Application No. 19/071,152

ELECTRONIC DEVICE PERFORMING CAMERA CALIBRATION, AND OPERATION METHOD THEREFOR

Non-Final OA (§103)
Filed: Mar 05, 2025
Examiner: NOH, JAE NAM
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 2m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 86% (above average; 382 granted / 445 resolved; +27.8% vs TC avg)
Interview Lift: -10.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 2m (fast prosecutor; 26 currently pending)
Total Applications: 471 (career history; across all art units)

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 31.5% (-8.5% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 445 resolved cases

Office Action

§103
DETAILED ACTION

This action is in response to the application filed on 3/5/2025. Claims 1-18 are pending. Acknowledgment is made of a claim for foreign priority. All of the certified copies of the priority documents have been received.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The references listed on the Information Disclosure Statement submitted on 3/5/2025 have been considered by the examiner (see attached PTO-1449).

Claim Mapping Notation

In this office action, the following notations are used to refer to the paragraph numbers or the column numbers and lines of portions of the cited references: [0005] (Paragraph number [0005]); C5 (Column 5); Pa5 (Page 5); S5 (Section 5). Furthermore, unless necessary to distinguish from other references in this action, “et al.” will be omitted when referring to a reference.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Seo et al. (US 20220189113 A1) in view of Hall et al. (US 10825197 B2).

Regarding claim 1, Seo discloses the invention substantially as claimed. Seo discloses:

1. An electronic device for performing camera calibration, the electronic device comprising: a communication interface [FIG. 1]; memory [see claim 10] storing one or more computer programs; and one or more processors including processing circuitry and communicatively coupled to the communication interface and the memory, wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: “[0037] As shown in FIG.
1, a three-dimensional skeleton generation method according to the present invention may be implemented as a program system on a computer terminal 30, which is configured to receive multiview depth and color (RGB, etc.) images 60 captured by a distributed camera system 20 to generate a three-dimensional skeleton. In other words, the three-dimensional skeleton generation method may be configured as a program so as to be installed and executed in the computer terminal 30. The program installed in the computer terminal 30 may operate as a single program system 40.” “[0046] Next, an overall configuration of a three-dimensional skeleton generation method using calibration based on a joint acquired from a multiview camera according to one embodiment of the present invention will be described.” obtain, via the communication interface, a first image of a user captured by a first camera, and a second image of the user captured by a second camera, “[0039] Meanwhile, the distributed camera system 20 may include a plurality of color-depth (RGB-D) cameras 21 for capturing an object 10 at different viewpoints.” … … obtain a projection relationship for projecting the 3D joint feature points onto the 2D position coordinate values of the second joint feature points, “[0075] Next, extrinsic calibration for optimizing an extrinsic parameter may be performed by using the joint of the skeleton of each viewpoint (S30).” “[0077] As shown in FIG. 6 or FIGS. 7A-7F, the joint of the skeleton of each viewpoint may be configured as a feature point set. In addition, a feature point having a maximum error may be detected by optimizing the extrinsic parameter with respect to the feature point set, the detected feature point may be excluded from the feature point set, and the optimization may be repeatedly performed with respect to the remaining set. In this case, the repetition may be performed until a size of the feature point set is 1. 
In addition, the optimized extrinsic parameter in a case where an optimization error is minimum may be acquired as a final extrinsic parameter.” and perform camera calibration by predicting a positional relationship between the first camera and the second camera based on the obtained projection relationship. “[0075] Next, extrinsic calibration for optimizing an extrinsic parameter may be performed by using the joint of the skeleton of each viewpoint (S30).” “[0077] As shown in FIG. 6 or FIGS. 7A-7F, the joint of the skeleton of each viewpoint may be configured as a feature point set. In addition, a feature point having a maximum error may be detected by optimizing the extrinsic parameter with respect to the feature point set, the detected feature point may be excluded from the feature point set, and the optimization may be repeatedly performed with respect to the remaining set. In this case, the repetition may be performed until a size of the feature point set is 1. In addition, the optimized extrinsic parameter in a case where an optimization error is minimum may be acquired as a final extrinsic parameter.” Seo does not disclose, extract first joint feature points, which are two-dimensional (2D) position coordinate values of joints of the user, from the first image, and second joint feature points, which are 2D position coordinate values of the joints, from the second image, obtain three-dimensional (3D) joint feature points of the joints by lifting the extracted first joint feature points to 3D position coordinate values, Hall discloses, extract first joint feature points, which are two-dimensional (2D) position coordinate values of joints of the user, from the first image, and second joint feature points, which are 2D position coordinate values of the joints, from the second image, C6 “Referring back to FIG. 
3, key point detection is performed, at processing block 330, for each detected bounding box by detecting and labeling key-points on each person at major joints (e.g., shoulder, hip, knee, neck, etc.). According to one embodiment, a CNN is also implemented to perform this process.” C2 “In embodiments, a 3D position estimation mechanism receives a plurality of 2D images captured by a camera array during a live event, locates key-points of human joints of a plurality of athletes included in the images, associates key-points of each athlete across the images, recovers a 3D body position of each of the plurality of athletes based on the associated key-points and generates an animated model of a motion for one or more of the plurality of athletes.” obtain three-dimensional (3D) joint feature points of the joints by lifting the extracted first joint feature points to 3D position coordinate values, C5 “FIG. 2 illustrates one embodiment of a 3D position estimation mechanism 110, including data capture module 201, bounding box detection logic 202, key point detection engine 203, multi-view association module 204, joint triangulation logic 205, model fitting logic 206 and temporal association logic 207. 
According to one embodiment, data capture module 201 receives images captured from an array of cameras included as one of various I/O sources 104.” C2 “In embodiments, a 3D position estimation mechanism receives a plurality of 2D images captured by a camera array during a live event, locates key-points of human joints of a plurality of athletes included in the images, associates key-points of each athlete across the images, recovers a 3D body position of each of the plurality of athletes based on the associated key-points and generates an animated model of a motion for one or more of the plurality of athletes.”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Hall and apply them to the teachings of Seo, so as to utilize 3D joint points derived from joints detected in 2D images, and other calibration-related schemes, when performing joint-based calibration in the system of Seo, as taught by Hall. One would have been motivated because, as Seo suggests, using 3D joint points derived from 2D images, together with other calibration-related schemes, would have increased the effectiveness of the calibration. Unless stated otherwise, the same rationale as given for the independent claim applies to the following dependent claims.

2. The electronic device of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to determine whether a 3D pose of the user consisting of the 3D joint feature points are suitable to apply to the camera calibration, based on a distribution of coordinate values in a z-axis direction among the 3D position coordinate values included in the 3D joint feature points.
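As a technical aside on the claim-2 limitation quoted above, the z-axis-distribution suitability test it recites can be sketched in a few lines. The minimum-spread threshold and the (N, 3) joint-array layout below are illustrative assumptions, not taken from the application or the cited art:

```python
import numpy as np

def pose_suitable_for_calibration(joints_3d, min_z_spread=0.3):
    """Return True if the lifted 3D pose is usable for calibration.

    joints_3d: (N, 3) array of 3D joint coordinates (x, y, z).
    min_z_spread: assumed minimum depth spread (meters). A pose whose
    joints are nearly coplanar in the z direction constrains the
    cameras' relative geometry poorly, so it is rejected.
    """
    z = np.asarray(joints_3d, dtype=float)[:, 2]
    return bool(z.max() - z.min() >= min_z_spread)
```

Under this reading, a device following claim 3 would, on a False result, display guide information asking the user to assume a different pose.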
Seo “[0078] In detail, primary optimization may be performed by using effective joint information acquired from each camera as a feature point. Next, optimization may be performed again by excluding the joint having the maximum error after the previous optimization, and an error for all joints may be recalculated and stored.” Seo “[0080] The above parameters may be calculated by using an optimization algorithm so that a Euclidean square distance of coordinates matched to the parameters may be minimized. A transformation matrix of a coordinate system may include parameters for rotation angles and translation values for each of x, y, and z-axes.” 3. The electronic device of claim 2, further comprising a display, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to, in a case that it is determined that the 3D pose of the user is not suitable to apply to the camera calibration, control the display to display guide information requesting the user to assume a predetermined pose. Seo “[0078] …The above process may be repeatedly performed until a difference between a converged value of an optimization error of a previous joint set and a converged value of an optimization error of a next joint set is less than or equal to a predetermined threshold. In other words, the repetition may be stopped when the difference is less than or equal to the predetermined threshold. In addition, the repetition may be stopped, and a camera parameter acquired by the current optimization upon the stopping may be acquired. This process may be repeatedly performed by using information about eight skeletons acquired from all the cameras.” Seo “[0037] As shown in FIG. 
1, a three-dimensional skeleton generation method according to the present invention may be implemented as a program system on a computer terminal 30, which is configured to receive multiview depth and color (RGB, etc.) images 60 captured by a distributed camera system 20 to generate a three-dimensional skeleton. In other words, the three-dimensional skeleton generation method may be configured as a program so as to be installed and executed in the computer terminal 30. The program installed in the computer terminal 30 may operate as a single program system 40.” Displaying a user interface in a computer system in conjunction with errors in Seo is understood to be inherent by one of ordinary skill in the art. 4. The electronic device of claim 2, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: obtain a plurality of first image frames captured over a period of time from the first camera and obtain a plurality of second image frames captured over the period of time from the second camera, Hall C5 “Multi-view association module 204 associates sets of key-points in different images associated with a player. Joint triangulation logic 205 triangulates a position of each joint in 3D space for a player. In one embodiment, joint triangulation logic 205 uses the camera projection matrices and the knowledge of pixel-location of the joints across different images to perform the triangulation.” obtain a plurality of 3D joint feature points by lifting a plurality of first joint feature points extracted from each of the plurality of first image frames, Hall C5 “FIG.
2 illustrates one embodiment of a 3D position estimation mechanism 110, including data capture module 201, bounding box detection logic 202, key point detection engine 203, multi-view association module 204, joint triangulation logic 205, model fitting logic 206 and temporal association logic 207. According to one embodiment, data capture module 201 receives images captured from an array of cameras included as one of various I/O sources 104.” Hall C2 “In embodiments, a 3D position estimation mechanism receives a plurality of 2D images captured by a camera array during a live event, locates key-points of human joints of a plurality of athletes included in the images, associates key-points of each athlete across the images, recovers a 3D body position of each of the plurality of athletes based on the associated key-points and generates an animated model of a motion for one or more of the plurality of athletes.” identify, among the plurality of first image frames, an image frame with a largest degree of distribution of coordinate values in the z-axis direction among a plurality of 3D position coordinate values of the plurality of 3D joint feature points, Hall C6 “Referring back to FIG. 3, joint triangulation is performed, processing block 350, once the key-points have been corresponded across views. According to one embodiment, the positions of each joint is triangulated using the camera matrices. In such an embodiment, joint triangulation is achieved by minimizing a photo-consistency error of the joints across all images. As a result, a position in the 3D space is found for each joint that minimizes the distance between the image of that point projected into the image plane of each camera and the detected key-point in that camera. 
Triangulating results in advantages, including: (1) while only two images are needed to triangulate a joint position, the information in all images that contain the joint is utilized, which mitigates errors due to calibration errors and incorrect correspondences; and (2) triangulating each joint independently of the others allows the triangulation to be performed in parallel.” extract the second joint feature points from the second image corresponding to the identified image frame among the plurality of second image frames, and Hall C6 “Referring back to FIG. 3, key point detection is performed, at processing block 330, for each detected bounding box by detecting and labeling key-points on each person at major joints (e.g., shoulder, hip, knee, neck, etc.). According to one embodiment, a CNN is also implemented to perform this process.” Hall C2 “In embodiments, a 3D position estimation mechanism receives a plurality of 2D images captured by a camera array during a live event, locates key-points of human joints of a plurality of athletes included in the images, associates key-points of each athlete across the images, recovers a 3D body position of each of the plurality of athletes based on the associated key-points and generates an animated model of a motion for one or more of the plurality of athletes.” perform the camera calibration based on a projection relationship between 3D joint coordinate values obtained from the identified image frame and the second joint feature points extracted from the second image. Seo “[0075] Next, extrinsic calibration for optimizing an extrinsic parameter may be performed by using the joint of the skeleton of each viewpoint (S30).” Seo “[0077] As shown in FIG. 6 or FIGS. 7A-7F, the joint of the skeleton of each viewpoint may be configured as a feature point set. 
In addition, a feature point having a maximum error may be detected by optimizing the extrinsic parameter with respect to the feature point set, the detected feature point may be excluded from the feature point set, and the optimization may be repeatedly performed with respect to the remaining set. In this case, the repetition may be performed until a size of the feature point set is 1. In addition, the optimized extrinsic parameter in a case where an optimization error is minimum may be acquired as a final extrinsic parameter.” 5. The electronic device of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: obtain 3D position coordinate values of the joints of the user, based on the first joint feature points, the second joint feature points, and the positional relationship between the first camera and the second camera, and estimate a 3D pose of the user based on the obtained 3D position coordinate values of the joints. Hall C6 “Referring back to FIG. 3, joint triangulation is performed, processing block 350, once the key-points have been corresponded across views. According to one embodiment, the positions of each joint is triangulated using the camera matrices. In such an embodiment, joint triangulation is achieved by minimizing a photo-consistency error of the joints across all images. As a result, a position in the 3D space is found for each joint that minimizes the distance between the image of that point projected into the image plane of each camera and the detected key-point in that camera. 
Triangulating results in advantages, including: (1) while only two images are needed to triangulate a joint position, the information in all images that contain the joint is utilized, which mitigates errors due to calibration errors and incorrect correspondences; and (2) triangulating each joint independently of the others allows the triangulation to be performed in parallel.” 6. The electronic device of claim 5, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device: obtain first position coordinate values and second position coordinate values by reprojecting the 3D position coordinate values onto 2D position coordinate values based on information about calibration between the first camera and the second camera, Seo “[0078] …The above process may be repeatedly performed until a difference between a converged value of an optimization error of a previous joint set and a converged value of an optimization error of a next joint set is less than or equal to a predetermined threshold. In other words, the repetition may be stopped when the difference is less than or equal to the predetermined threshold. In addition, the repetition may be stopped, and a camera parameter acquired by the current optimization upon the stopping may be acquired. This process may be repeatedly performed by using information about eight skeletons acquired from all the cameras.” calculate a difference value between the first position coordinate values obtained as a result of the reprojection and the first joint feature points and Seo “[0078] …The above process may be repeatedly performed until a difference between a converged value of an optimization error of a previous joint set and a converged value of an optimization error of a next joint set is less than or equal to a predetermined threshold. 
In other words, the repetition may be stopped when the difference is less than or equal to the predetermined threshold. In addition, the repetition may be stopped, and a camera parameter acquired by the current optimization upon the stopping may be acquired. This process may be repeatedly performed by using information about eight skeletons acquired from all the cameras.” Seo “[0080] The above parameters may be calculated by using an optimization algorithm so that a Euclidean square distance of coordinates matched to the parameters may be minimized. A transformation matrix of a coordinate system may include parameters for rotation angles and translation values for each of x, y, and z-axes.” a difference value between the second position coordinate values and the second joint feature points, Seo “[0078] …The above process may be repeatedly performed until a difference between a converged value of an optimization error of a previous joint set and a converged value of an optimization error of a next joint set is less than or equal to a predetermined threshold. In other words, the repetition may be stopped when the difference is less than or equal to the predetermined threshold. In addition, the repetition may be stopped, and a camera parameter acquired by the current optimization upon the stopping may be acquired. This process may be repeatedly performed by using information about eight skeletons acquired from all the cameras.” Seo “[0080] The above parameters may be calculated by using an optimization algorithm so that a Euclidean square distance of coordinates matched to the parameters may be minimized. 
A transformation matrix of a coordinate system may include parameters for rotation angles and translation values for each of x, y, and z-axes.” compare the calculated difference values with a predetermined threshold, and Seo “[0078] …The above process may be repeatedly performed until a difference between a converged value of an optimization error of a previous joint set and a converged value of an optimization error of a next joint set is less than or equal to a predetermined threshold. In other words, the repetition may be stopped when the difference is less than or equal to the predetermined threshold. In addition, the repetition may be stopped, and a camera parameter acquired by the current optimization upon the stopping may be acquired. This process may be repeatedly performed by using information about eight skeletons acquired from all the cameras.” determine accuracy of the camera calibration based on a result of the comparison. Seo “[0078] In detail, primary optimization may be performed by using effective joint information acquired from each camera as a feature point. Next, optimization may be performed again by excluding the joint having the maximum error after the previous optimization, and an error for all joints may be recalculated and stored.” 7. The electronic device of claim 5, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: measure a bone length between joints from the 3D pose, calculate a difference value by comparing the measured bone length with a bone length of a normal person, and Hall C9 “At processing block 360, kinematic model fitting is performed to capture the position and orientation of each of the major segments of the body. 
A kinematic model is implemented by imposing constraints on the axes of rotation (e.g., the lower arm may only rotate along the local coordinate frame y axis at the elbow) to reduce the degrees of freedom in the position and orientation. As a result, the model fitting results in a kinematic body model being generated for each participant. FIGS. 4F&4G illustrate embodiments of kinematic model fitting performed on athletes in the region of interest.” determine whether to re-perform the camera calibration based on a result of comparing the calculated difference value with a predetermined threshold. Seo “[0078] In detail, primary optimization may be performed by using effective joint information acquired from each camera as a feature point. Next, optimization may be performed again by excluding the joint having the maximum error after the previous optimization, and an error for all joints may be recalculated and stored. The above process may be repeatedly performed until a difference between a converged value of an optimization error of a previous joint set and a converged value of an optimization error of a next joint set is less than or equal to a predetermined threshold. In other words, the repetition may be stopped when the difference is less than or equal to the predetermined threshold. In addition, the repetition may be stopped, and a camera parameter acquired by the current optimization upon the stopping may be acquired. This process may be repeatedly performed by using information about eight skeletons acquired from all the cameras.” 8. 
The electronic device of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device: extract, from the first image, a plurality of first joint feature points that are 2D position coordinate values of joints of a plurality of users, and extract, from the second image, a plurality of second joint feature points that are 2D position coordinate values of the joints of the plurality of users, Hall C6 “Referring back to FIG. 3, key point detection is performed, at processing block 330, for each detected bounding box by detecting and labeling key-points on each person at major joints (e.g., shoulder, hip, knee, neck, etc.). According to one embodiment, a CNN is also implemented to perform this process.” Hall C2 “In embodiments, a 3D position estimation mechanism receives a plurality of 2D images captured by a camera array during a live event, locates key-points of human joints of a plurality of athletes included in the images, associates key-points of each athlete across the images, recovers a 3D body position of each of the plurality of athletes based on the associated key-points and generates an animated model of a motion for one or more of the plurality of athletes.” obtain a plurality of first 3D joint feature points and a plurality of second 3D joint feature points by lifting the plurality of first joint feature points and the plurality of second joint feature points to 3D position coordinate values, respectively, and Hall C5 “FIG. 2 illustrates one embodiment of a 3D position estimation mechanism 110, including data capture module 201, bounding box detection logic 202, key point detection engine 203, multi-view association module 204, joint triangulation logic 205, model fitting logic 206 and temporal association logic 207. 
According to one embodiment, data capture module 201 receives images captured from an array of cameras included as one of various I/O sources 104.” Hall C2 “In embodiments, a 3D position estimation mechanism receives a plurality of 2D images captured by a camera array during a live event, locates key-points of human joints of a plurality of athletes included in the images, associates key-points of each athlete across the images, recovers a 3D body position of each of the plurality of athletes based on the associated key-points and generates an animated model of a motion for one or more of the plurality of athletes.” distinguish the plurality of users included in the first image and the second image by matching a first 3D pose consisting of the obtained plurality of first 3D joint feature points with a second 3D pose consisting of the plurality of second 3D joint feature points. Hall C14 “Some embodiments pertain to Example 1 that includes an apparatus to facilitate three dimensional (3D) position estimation, comprising one or more processors to receive a plurality 2D images captured by a camera array during a live event, locate key-points of human joints of a plurality of event participants included in the images, , associate key-points of each participant across the images and recover a 3D body position of each of the plurality of participants based on the associated key-points.” Regarding the claims 9-18, they recite elements that are at least included in the claims 1-8, 1 and 2 above but in a different claim form and/or encoding/decoding counterpart that are reciprocal. Therefore, the same rationale for the rejection of the claims 1-8, 1 and 2 applies. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Claveau et al. (US 9965870 B2) and Wong et al. (US 20210004933 A1) disclose relevant art related to the subject matter of the present invention. 
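Stepping back from the mapping above: the "projection relationship" of claim 1 (projecting 3D joint feature points onto the 2D coordinates of the second camera's joint feature points) is, in standard multi-view geometry, a 3x4 projection matrix estimated from 3D-2D correspondences. The direct linear transform (DLT) sketch below illustrates that step under textbook assumptions; it is not asserted to be the actual method of the application, Seo, or Hall:

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Direct Linear Transform: estimate the 3x4 matrix P such that
    x ~ P @ [X; 1], from >= 6 non-degenerate 3D-2D correspondences.
    Each correspondence contributes two linear equations in the 12
    entries of P; the solution is the SVD null-space vector of A."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, points_3d):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    X_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    x_h = (P @ X_h.T).T
    return x_h[:, :2] / x_h[:, 2:3]
```

With P estimated for the second camera, the rotation and translation between the cameras (the claimed "positional relationship") can be factored out of P when the intrinsics are known.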
A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. An extension of time may be obtained under 37 CFR 1.136(a). However, in no event, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAE N. NOH whose telephone number is (571) 270-0686. The examiner can normally be reached on Mon-Fri 8:30AM-5PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JAE N NOH/ Primary Examiner Art Unit 2481
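For readers tracing claims 5-6: the reprojection-based accuracy check mapped there (reproject the triangulated 3D joints into each view, compare against the detected 2D joint feature points, and threshold the difference) can be sketched as follows. The pixel threshold and the mean-error aggregation are illustrative assumptions:

```python
import numpy as np

def calibration_accuracy_ok(P1, P2, joints_3d, kps1, kps2, thresh_px=5.0):
    """Claim-6-style accuracy check, sketched: reproject the 3D joints
    through each camera's 3x4 projection matrix and require the mean
    pixel error against the detected 2D keypoints to stay under an
    assumed threshold (5 px here) in both views."""
    def reproject(P, X):
        Xh = np.hstack([X, np.ones((len(X), 1))])
        xh = (P @ Xh.T).T
        return xh[:, :2] / xh[:, 2:3]
    err1 = np.linalg.norm(reproject(P1, joints_3d) - kps1, axis=1).mean()
    err2 = np.linalg.norm(reproject(P2, joints_3d) - kps2, axis=1).mean()
    return bool(max(err1, err2) <= thresh_px)
```

A failing check would, per claim 7's logic, trigger re-performing the calibration.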

Prosecution Timeline

Mar 05, 2025: Application Filed
Mar 16, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604025: METHOD FOR VERIFYING IMAGE DATA ENCODED IN AN ENCODER UNIT (2y 5m to grant; granted Apr 14, 2026)
Patent 12593071: ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12587679: LOW-LATENCY MACHINE LEARNING-BASED STEREO STREAMING (2y 5m to grant; granted Mar 24, 2026)
Patent 12574571: FRAME SELECTION FOR STREAMING APPLICATIONS (2y 5m to grant; granted Mar 10, 2026)
Patent 12574529: IMAGE ENCODING AND DECODING METHOD AND APPARATUS (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 76% (-10.0%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 445 resolved cases by this examiner. Grant probability derived from career allow rate.
