Prosecution Insights
Last updated: April 19, 2026
Application No. 18/527,955

ARTIFICIAL INTELLIGENCE DEEP LEARNING FOR CONTROLLING ALIASING ARTIFACTS

Non-Final OA (§102)
Filed: Dec 04, 2023
Examiner: ZHENG, JACKY X
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 80% (above average; 667 granted / 837 resolved, +17.7% vs TC avg)
Interview Lift: +17.2% (allowance among resolved cases with an interview vs. without)
Typical Timeline: 2y 6m average prosecution; 21 applications currently pending
Career History: 858 total applications across all art units
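As a sanity check, the headline career figures can be re-derived from the raw counts above; a quick sketch (the 80% shown is the rounded career allowance rate):

```python
# Re-derive the examiner's career statistics from the raw counts above.
granted, resolved = 667, 837
allow_rate = granted / resolved       # career allowance rate
print(f"{allow_rate:.1%}")            # 79.7%, displayed as 80%

total_apps = 858
pending = total_apps - resolved       # applications not yet resolved
print(pending)                        # 21 currently pending
```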

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 49.9% (+9.9% vs TC avg)
§102: 28.7% (-11.3% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 837 resolved cases.
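Each statute's rate is reported together with its offset from the Tech Center average, so the implied TC baseline can be recovered by subtraction; a quick consistency check on the figures above:

```python
# Per-statute rates for this examiner and the reported offsets vs. TC average.
examiner = {"101": 8.1, "103": 49.9, "102": 28.7, "112": 11.3}
delta    = {"101": -31.9, "103": 9.9, "102": -11.3, "112": -28.7}

# Implied Tech Center average = examiner rate minus the reported delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Notably, all four deltas are consistent with a single ~40% Tech Center baseline estimate.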

Office Action

§102
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is an initial office action in response to communication(s) filed on December 4, 2023. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 4, 2023 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 8-9 and 15-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lung et al. (U.S. Pub. No. 2023/0153958 A1, hereinafter “Lung”).

With regard to claim 1, the claim is drawn to a method (see Lung, i.e. fig. 4-6, 7 and etc., disclose the method for processing images or frames with aliasing artifacts) comprising: receiving a degraded image comprising aliasing artifacts (see Lung, i.e. in para. 23 and etc., disclose that “[0023] The motion estimation circuit 310 can receive a plurality of successive images or frames including at least a current frame and a previous frame.
For example, the current frame and the previous frame can be a streaming of video frames, which may be low-resolution and have aliased quality, from a cloud source via Internet. As another example, the current frame and the previous frame can be game frames that are generated by a processor, e.g., a GPU, of a mobile phone…”); inputting the degraded image to an image enhancement network (see Lung, i.e. see fig. 3-4, in para. 22 and etc., disclose that “[0022] FIG. 3 shows a functional block diagram of an exemplary device 300 for processing images or frames with aliasing artifacts according to some embodiments of the disclosure. The device 300 can enhance resolution (e.g., via a super-resolution technique) of a current frame and remove aliasing artifacts of the enhanced-resolution current frame by processing either the current frame only or the current frame and a previous frame that has been warped and aligned with the current frame, in order to retain the information contained in the current frame as much as possible. For example, the device 300 can include a motion estimation circuit 310, a warping circuit 320, and a temporal decision circuit 330.”; ); processing, using the image enhancement network, the degraded image to remove one or more of the aliasing artifacts (see Lung, i.e. 3-4, para. 28 and etc., discloses that “[0028] FIG. 4 shows a functional block diagram of an exemplary frame processor 400 for processing images or frames with aliasing artifacts according to some embodiments of the disclosure. The frame processor 400 can be coupled to the temporal decision circuit 330 or to the frame fusion circuit 340 of the device 300. The frame processor 400 can include an attention reference frame generator 430 and an artificial intelligence (AI) neural network (NN) 440 coupled to the attention reference frame generator 430. 
The attention reference frame generator 430 can generate an attention reference frame based on a first high-resolution frame with aliasing artifacts and a second high-resolution frame with aliasing artifacts removed. For example, the attention reference frame generator 430 can compare the first frame and the second frame to capture key information of the first frame that is distinguishable from the second frame. The AI NN 440 can remove aliasing artifacts of another frame, e.g., a low-resolution frame, based on the attention reference frame. For example, the AI NN 440 can be trained by using the attention reference frame, and then enhance resolution of the low-resolution frame and remove the aliasing artifacts of the low-resolution frame with its resolution enhanced by only focusing on a portion of the low-resolution frame that corresponds to the key information contained in the attention reference frame”) ; and outputting, by the image enhancement network, a restored high-quality image (see Lung, i.e. in fig. 3-4, para. 28 and etc., disclose that “… the AI NN 440 can be trained by using the attention reference frame, and then enhance resolution of the low-resolution frame and remove the aliasing artifacts of the low-resolution frame with its resolution enhanced by only focusing on a portion of the low-resolution frame that corresponds to the key information contained in the attention reference frame”). With regard to claim 2, the claim is drawn to the method of Claim 1, wherein the image enhancement network is trained by: obtaining a high-quality image of an environment (see Lung, i.e. in fig. 7, step 710, para. 37 and etc., disclose that “[0037] At step 710, a first high-resolution frame with aliasing artifacts and a second high-resolution frame without aliasing artifacts removed are received.”); and generating at least one degraded image of the environment by performing an aliasing artifact simulation on the obtained high-quality image (see Lung, i.e. in fig. 
7, step 730, para. 39, disclose that “[0039] At step 730, an AI NN can be trained with a low-resolution frame and the attention reference frame…”), wherein performing the aliasing artifact simulation comprises at least one of: performing a broken line artifact simulation to introduce one or more broken line artifacts on one or more objects in the environment of the high-quality image; and performing a jaggy artifact simulation to introduce jaggy edges to one or more other objects in the environment of the high-quality image (see Lung, i.e. in para. 19 and etc., disclose that “[0019] Anti-aliasing is a technique to solve the jaggies issue by oversampling an image at a rate higher than an intended final output and thus smoothing out the jagged edges of the image. For example, multisample anti-aliasing (MSAA), one of a variety of supersampling anti-aliasing (SSAA) algorithms proposed to address the aliasing occurring at the edges of the triangle 110, can simulate each pixel of a display as having a plurality of subpixels and determine the color of the pixel based on the number of the subpixels that are covered by an object image. FIG. 2 illustrates how the exemplary rectangle 110 can be displayed on the low-resolution raster display 100 with MSAA applied according to some embodiments of the disclosure. MSAA can simulate each pixel 120 as having 2×2 subpixels 220, each of which has a subsample point 230, and determine the color of the pixel 120 based on the number of the subsample points 230 that are covered by the rectangle 110. 
For example, when no subsample point 230A is covered by the triangle 110, no fragment will be generated for a pixel 120A with the sample points 230 and the pixel 120A is blank; when only one subsample point 230B is covered by the triangle 110, a pixel 120B with the sample points 230B will have a light color, e.g., one fourth of the color of the rectangle 110, which can be estimated by a fragment shader; when only two subsample points 230C are covered by the triangle 110, a pixel 120C with the sample points 230C will have a darker color than the pixel 120B, e.g., one half of the color of the rectangle 110; when as many as three subsample points 230D are covered by the triangle 110, a pixel 120D with the sample points 230D will have a darker color than the pixel 120C, e.g., three fourths of the color of the rectangle 110; when all of subsample points 230E are covered by the triangle 110, a pixel 120E with the sample points 230E will be have the darkest color the same as the pixel 120B shown in FIG. 1. The triangle 110 thus rendered on the display 100 with MSAA applied is shown having smoother edges than the triangle 110 rendered on the display 100 of FIG. 1 without MSAA applied…”) With regard to claim 8, the claim is drawn to an electronic device (see Lung, i.e. in fig. 3, para. 27, disclose the device 300) comprising: at least one processing device (see Lung, i.e. in para. 27, discloses that “[0027] As shown in FIG. 3, the device 300 can further include a frame processor 350. The frame processor 350 can be coupled to the frame fusion circuit 340 and process frames output from the frame fusion circuit 340, which can be the current frame, the current frame concatenated with the warped previous frame, or the single frame. For example, the frame processor 350 can resize or enhance resolution of the current frame and remove aliasing artifacts of the current frame with its resolution enhanced. 
In an embodiment, the frame fusion circuit 340 can be omitted, and the frame processor 350 can be coupled to the temporal decision circuit 330 directly and process either the current frame or the current frame and the warped previous frame. As the warped previous frame can also be generated by the warping circuit 320 and be output to the frame processor 350 when the warped previous frame is consistent with the current frame, the frame processor 350 can enhance the resolution of the current frame and remove the aliasing artifacts of the enhanced-resolution current frame by further taking the warped previous frame into consideration. In such a scenario, less information of the processed current frame will be lost, as compared with the current frame that is processed by considering the current frame only…”) configured to: receive a degraded image comprising aliasing artifacts (see Lung, i.e. in para. 23 and etc., disclose that “[0023] The motion estimation circuit 310 can receive a plurality of successive images or frames including at least a current frame and a previous frame. For example, the current frame and the previous frame can be a streaming of video frames, which may be low-resolution and have aliased quality, from a cloud source via Internet. As another example, the current frame and the previous frame can be game frames that are generated by a processor, e.g., a GPU, of a mobile phone…”); input the degraded image to an image enhancement network (see Lung, i.e. see fig. 3-4, in para. 22 and etc., disclose that “[0022] FIG. 3 shows a functional block diagram of an exemplary device 300 for processing images or frames with aliasing artifacts according to some embodiments of the disclosure. 
The device 300 can enhance resolution (e.g., via a super-resolution technique) of a current frame and remove aliasing artifacts of the enhanced-resolution current frame by processing either the current frame only or the current frame and a previous frame that has been warped and aligned with the current frame, in order to retain the information contained in the current frame as much as possible. For example, the device 300 can include a motion estimation circuit 310, a warping circuit 320, and a temporal decision circuit 330.”; ); process, using the image enhancement network, the degraded image to remove one or more of the aliasing artifacts (see Lung, i.e. 3-4, para. 28 and etc., discloses that “[0028] FIG. 4 shows a functional block diagram of an exemplary frame processor 400 for processing images or frames with aliasing artifacts according to some embodiments of the disclosure. The frame processor 400 can be coupled to the temporal decision circuit 330 or to the frame fusion circuit 340 of the device 300. The frame processor 400 can include an attention reference frame generator 430 and an artificial intelligence (AI) neural network (NN) 440 coupled to the attention reference frame generator 430. The attention reference frame generator 430 can generate an attention reference frame based on a first high-resolution frame with aliasing artifacts and a second high-resolution frame with aliasing artifacts removed. For example, the attention reference frame generator 430 can compare the first frame and the second frame to capture key information of the first frame that is distinguishable from the second frame. The AI NN 440 can remove aliasing artifacts of another frame, e.g., a low-resolution frame, based on the attention reference frame. 
For example, the AI NN 440 can be trained by using the attention reference frame, and then enhance resolution of the low-resolution frame and remove the aliasing artifacts of the low-resolution frame with its resolution enhanced by only focusing on a portion of the low-resolution frame that corresponds to the key information contained in the attention reference frame”) ; and output, by the image enhancement network, a restored high-quality image (see Lung, i.e. in fig. 3-4, para. 28 and etc., disclose that “… the AI NN 440 can be trained by using the attention reference frame, and then enhance resolution of the low-resolution frame and remove the aliasing artifacts of the low-resolution frame with its resolution enhanced by only focusing on a portion of the low-resolution frame that corresponds to the key information contained in the attention reference frame”). With regard to claim 9, the claim is drawn to the electronic device of Claim 8, wherein, to train the image enhancement network, the at least one processing device is configured to: obtain a high-quality image of an environment (see Lung, i.e. in fig. 7, step 710, para. 37 and etc., disclose that “[0037] At step 710, a first high-resolution frame with aliasing artifacts and a second high-resolution frame without aliasing artifacts removed are received.”); and perform an aliasing artifact simulation on the obtained high-quality image in order to generate at least one degraded image of the environment (see Lung, i.e. in fig. 7, step 730, para. 
39, disclose that “[0039] At step 730, an AI NN can be trained with a low-resolution frame and the attention reference frame…”), wherein, to perform the aliasing artifact simulation, the at least one processing device is configured to at least one of: perform a broken line artifact simulation to introduce one or more broken line artifacts on one or more objects in the environment of the high-quality image; and perform a jaggy artifact simulation to introduce jaggy edges to one or more other objects in the environment of the high-quality image (see Lung, i.e. in para. 19 and etc., disclose that “[0019] Anti-aliasing is a technique to solve the jaggies issue by oversampling an image at a rate higher than an intended final output and thus smoothing out the jagged edges of the image. For example, multisample anti-aliasing (MSAA), one of a variety of supersampling anti-aliasing (SSAA) algorithms proposed to address the aliasing occurring at the edges of the triangle 110, can simulate each pixel of a display as having a plurality of subpixels and determine the color of the pixel based on the number of the subpixels that are covered by an object image. FIG. 2 illustrates how the exemplary rectangle 110 can be displayed on the low-resolution raster display 100 with MSAA applied according to some embodiments of the disclosure. MSAA can simulate each pixel 120 as having 2×2 subpixels 220, each of which has a subsample point 230, and determine the color of the pixel 120 based on the number of the subsample points 230 that are covered by the rectangle 110. 
For example, when no subsample point 230A is covered by the triangle 110, no fragment will be generated for a pixel 120A with the sample points 230 and the pixel 120A is blank; when only one subsample point 230B is covered by the triangle 110, a pixel 120B with the sample points 230B will have a light color, e.g., one fourth of the color of the rectangle 110, which can be estimated by a fragment shader; when only two subsample points 230C are covered by the triangle 110, a pixel 120C with the sample points 230C will have a darker color than the pixel 120B, e.g., one half of the color of the rectangle 110; when as many as three subsample points 230D are covered by the triangle 110, a pixel 120D with the sample points 230D will have a darker color than the pixel 120C, e.g., three fourths of the color of the rectangle 110; when all of subsample points 230E are covered by the triangle 110, a pixel 120E with the sample points 230E will be have the darkest color the same as the pixel 120B shown in FIG. 1. The triangle 110 thus rendered on the display 100 with MSAA applied is shown having smoother edges than the triangle 110 rendered on the display 100 of FIG. 1 without MSAA applied…”).

With regard to claim 15, the claim is drawn to a non-transitory machine readable medium containing instructions that when executed cause at least one processor of an electronic device (in addition to the discussion of claim 8, further in Lung, i.e. para. 44-45 and etc., disclose that “[0044] The processes and functions described herein can be implemented as a computer program which, when executed by one or more processors, can cause the one or more processors to perform the respective processes and functions. The computer program may be stored or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with, or as part of, other hardware.
The computer program may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. For example, the computer program can be obtained and loaded into an apparatus, including obtaining the computer program through physical medium or distributed system, including, for example, from a server connected to the Internet. [0045] The computer program may be accessible from a computer-readable medium providing program instructions for use by or in connection with a computer or any instruction execution system. The computer readable medium may include any apparatus that stores, communicates, propagates, or transports the computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The computer-readable medium may include a computer-readable non-transitory storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a magnetic disk and an optical disk, and the like. The computer-readable non-transitory storage medium can include all types of computers readable medium, including magnetic storage medium, optical storage medium, flash medium, and solid state storage medium.”) to: receive a degraded image comprising aliasing artifacts (see Lung, i.e. in para. 23 and etc., disclose that “[0023] The motion estimation circuit 310 can receive a plurality of successive images or frames including at least a current frame and a previous frame. For example, the current frame and the previous frame can be a streaming of video frames, which may be low-resolution and have aliased quality, from a cloud source via Internet. 
As another example, the current frame and the previous frame can be game frames that are generated by a processor, e.g., a GPU, of a mobile phone…”); input the degraded image to an image enhancement network (see Lung, i.e. see fig. 3-4, in para. 22 and etc., disclose that “[0022] FIG. 3 shows a functional block diagram of an exemplary device 300 for processing images or frames with aliasing artifacts according to some embodiments of the disclosure. The device 300 can enhance resolution (e.g., via a super-resolution technique) of a current frame and remove aliasing artifacts of the enhanced-resolution current frame by processing either the current frame only or the current frame and a previous frame that has been warped and aligned with the current frame, in order to retain the information contained in the current frame as much as possible. For example, the device 300 can include a motion estimation circuit 310, a warping circuit 320, and a temporal decision circuit 330.”; ); process, using the image enhancement network, the degraded image to remove one or more of the aliasing artifacts (see Lung, i.e. 3-4, para. 28 and etc., discloses that “[0028] FIG. 4 shows a functional block diagram of an exemplary frame processor 400 for processing images or frames with aliasing artifacts according to some embodiments of the disclosure. The frame processor 400 can be coupled to the temporal decision circuit 330 or to the frame fusion circuit 340 of the device 300. The frame processor 400 can include an attention reference frame generator 430 and an artificial intelligence (AI) neural network (NN) 440 coupled to the attention reference frame generator 430. The attention reference frame generator 430 can generate an attention reference frame based on a first high-resolution frame with aliasing artifacts and a second high-resolution frame with aliasing artifacts removed. 
For example, the attention reference frame generator 430 can compare the first frame and the second frame to capture key information of the first frame that is distinguishable from the second frame. The AI NN 440 can remove aliasing artifacts of another frame, e.g., a low-resolution frame, based on the attention reference frame. For example, the AI NN 440 can be trained by using the attention reference frame, and then enhance resolution of the low-resolution frame and remove the aliasing artifacts of the low-resolution frame with its resolution enhanced by only focusing on a portion of the low-resolution frame that corresponds to the key information contained in the attention reference frame”) ; and output, by the image enhancement network, a restored high-quality image (see Lung, i.e. in fig. 3-4, para. 28 and etc., disclose that “… the AI NN 440 can be trained by using the attention reference frame, and then enhance resolution of the low-resolution frame and remove the aliasing artifacts of the low-resolution frame with its resolution enhanced by only focusing on a portion of the low-resolution frame that corresponds to the key information contained in the attention reference frame”). With regard to claim 16, the claim is drawn to the non-transitory machine readable medium of Claim 15, further containing instructions that when executed cause the at least one processor to train the image enhancement network; wherein the instructions that when executed cause the at least one processor to train the image enhancement network comprise: instructions that when executed cause the at least one processor to obtain a high-quality image of an environment (see Lung, i.e. in fig. 7, step 710, para. 
37 and etc., disclose that “[0037] At step 710, a first high-resolution frame with aliasing artifacts and a second high-resolution frame without aliasing artifacts removed are received.”); and instructions that when executed cause the at least one processor to perform an aliasing artifact simulation on the obtained high-quality image in order to generate at least one degraded image of the environment (see Lung, i.e. in fig. 7, step 730, para. 39, disclose that “[0039] At step 730, an AI NN can be trained with a low-resolution frame and the attention reference frame…”); and wherein the instructions that when executed cause the at least one processor to perform the aliasing artifact simulation comprise at least one of: instructions that when executed cause the at least one processor to perform a broken line artifact simulation to introduce one or more broken line artifacts on one or more objects in the environment of the high-quality image; and instructions that when executed cause the at least one processor to perform a jaggy artifact simulation to introduce jaggy edges to one or more other objects in the environment of the high-quality image (see Lung, i.e. in para. 19 and etc., disclose that “[0019] Anti-aliasing is a technique to solve the jaggies issue by oversampling an image at a rate higher than an intended final output and thus smoothing out the jagged edges of the image. For example, multisample anti-aliasing (MSAA), one of a variety of supersampling anti-aliasing (SSAA) algorithms proposed to address the aliasing occurring at the edges of the triangle 110, can simulate each pixel of a display as having a plurality of subpixels and determine the color of the pixel based on the number of the subpixels that are covered by an object image. FIG. 2 illustrates how the exemplary rectangle 110 can be displayed on the low-resolution raster display 100 with MSAA applied according to some embodiments of the disclosure. 
MSAA can simulate each pixel 120 as having 2×2 subpixels 220, each of which has a subsample point 230, and determine the color of the pixel 120 based on the number of the subsample points 230 that are covered by the rectangle 110. For example, when no subsample point 230A is covered by the triangle 110, no fragment will be generated for a pixel 120A with the sample points 230 and the pixel 120A is blank; when only one subsample point 230B is covered by the triangle 110, a pixel 120B with the sample points 230B will have a light color, e.g., one fourth of the color of the rectangle 110, which can be estimated by a fragment shader; when only two subsample points 230C are covered by the triangle 110, a pixel 120C with the sample points 230C will have a darker color than the pixel 120B, e.g., one half of the color of the rectangle 110; when as many as three subsample points 230D are covered by the triangle 110, a pixel 120D with the sample points 230D will have a darker color than the pixel 120C, e.g., three fourths of the color of the rectangle 110; when all of subsample points 230E are covered by the triangle 110, a pixel 120E with the sample points 230E will be have the darkest color the same as the pixel 120B shown in FIG. 1. The triangle 110 thus rendered on the display 100 with MSAA applied is shown having smoother edges than the triangle 110 rendered on the display 100 of FIG. 1 without MSAA applied…”).

Allowable Subject Matter

With regard to Claims 3-7, 10-14 and 17-20, the claims are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and overcoming the corresponding rejections and/or objection (if any) set forth in the Office Action above.
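The MSAA passage quoted from Lung maps the count of covered subsample points in a 2×2 grid to a fractional pixel color (blank, one fourth, one half, three fourths, full). A toy sketch of that mapping (the function name is illustrative, not Lung's):

```python
def msaa_pixel_fraction(covered_subsamples: int, subsamples: int = 4) -> float:
    """Fraction of the primitive's color assigned to a pixel under MSAA.

    With 2x2 subsample points, 0 covered points -> blank pixel (0.0),
    1 -> one fourth of the color, 2 -> one half, 3 -> three fourths,
    4 -> the full (darkest) color.
    """
    if not 0 <= covered_subsamples <= subsamples:
        raise ValueError("covered_subsamples out of range")
    return covered_subsamples / subsamples

print([msaa_pixel_fraction(n) for n in range(5)])  # [0.0, 0.25, 0.5, 0.75, 1.0]
```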
The following is a statement of reasons for the indication of allowable subject matter:

With regard to claim 3, the closest prior art of record, Lung, does not disclose or suggest, among the other limitations, the additional required limitation of “the method of Claim 2, wherein performing the broken line artifact simulation comprises: applying an affine transform on the one or more objects in the environment of the high-quality image; synthesizing the transformed one or more objects on an image grid using interpolation; applying an inverse affine transform to the transformed one or more objects; performing another interpolation of the inverse transformed one or more objects; and outputting a first aliasing artifact image comprising at least one broken line artifact”. These additional features, in combination with all the other features required in the claimed invention, are neither taught nor suggested by Lung.

With regard to claims 4-5, these claims depend directly or indirectly from Claim 3, and each encompasses the required limitations recited in the claim discussed above.

With regard to claim 6, the closest prior art of record, Lung, does not disclose or suggest, among the other limitations, the additional required limitation of “the method of Claim 2, wherein performing the jaggy artifact simulation comprises: detecting one or more edge transition regions in the high-quality image and outputting a detection map; identifying, using the detection map, one or more pixels in the detection map associated with an edge transition region; determining whether the one or more pixels have values within a threshold distance to one or more neighboring pixels in the high-quality image; and replacing the identified one or more pixels with the one or more neighboring pixels”. These additional features, in combination with all the other features required in the claimed invention, are neither taught nor suggested by Lung.
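Claim 3's allowable feature recites a round trip through an affine transform, interpolation onto the image grid, the inverse transform, and a second interpolation; the double resampling is what injects broken-line artifacts. A minimal Python sketch of that general technique, using nearest-neighbor rotation as the (deliberately aliasing-prone) interpolation. This is an illustration of the recited steps under simplifying assumptions, not the applicant's actual implementation:

```python
import math

def rotate_nn(img, angle_deg):
    """Nearest-neighbor rotation about the image center (no anti-aliasing).

    Each output pixel samples the single nearest input pixel, so thin
    structures can drop samples during resampling.
    """
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse-map the output pixel into the source image.
            sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
            sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = img[iy][ix]
    return out

def broken_line_simulation(img, angle_deg=7.0):
    """Claim-3-style pipeline sketch: affine transform, resample onto the
    grid, inverse affine transform, resample again; the two quantized
    resamplings introduce broken-line artifacts on thin features."""
    return rotate_nn(rotate_nn(img, angle_deg), -angle_deg)

# A one-pixel-wide horizontal line; after the round trip, pixels along the
# line are typically dropped or displaced, i.e. the line "breaks".
size = 33
img = [[1 if y == size // 2 else 0 for x in range(size)] for y in range(size)]
out = broken_line_simulation(img)
print(sum(map(sum, img)), sum(map(sum, out)))
```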
With regard to claim 7, the claim depends directly or indirectly from Claim 6 and encompasses the required limitations recited in the claim discussed above.

With regard to claim 10, the closest prior art of record, Lung, does not disclose or suggest, among the other limitations, the additional required limitation of “… to the electronic device of Claim 9, wherein, to perform the broken line artifact simulation, the at least one processing device is configured to: apply an affine transform on the one or more objects in the environment of the high-quality image; synthesize the transformed one or more objects on an image grid using interpolation; apply an inverse affine transform to the transformed one or more objects; perform another interpolation of the inverse transformed one or more objects; and output a first aliasing artifact image comprising at least one broken line artifact”. These additional features, in combination with all the other features required in the claimed invention, are neither taught nor suggested by Lung.

With regard to claims 11-12, these claims depend directly or indirectly from Claim 10, and each encompasses the required limitations recited in the claim discussed above.
With regard to claim 13, the closest prior art of record, Lung, does not disclose or suggest, among the other limitations, the additional required limitation of “… the electronic device of Claim 9, wherein, to perform the jaggy artifact simulation, the at least one processing device is configured to: detect one or more edge transition regions in the high-quality image and outputting a detection map; identify, using the detection map, one or more pixels in the detection map associated with an edge transition region; determine whether the one or more pixels have values within a threshold distance to one or more neighboring pixels in the high-quality image; and replace the identified one or more pixels with the one or more neighboring pixels”. These additional features, in combination with all the other features required in the claimed invention, are neither taught nor suggested by Lung.

With regard to claim 14, the claim depends directly or indirectly from Claim 13 and encompasses the required limitations recited in the claim discussed above.
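The jaggy artifact simulation recited in claims 6 and 13 (build a detection map of edge-transition pixels, then replace detected pixels with nearby neighbor values) can be sketched on a single image row. The gradient threshold, the outward walk to the flanking flat pixels, and the nearest-in-value replacement rule are all illustrative simplifications, not the applicant's actual algorithm:

```python
def jaggy_artifact_simulation(row, grad_thresh=0.1):
    """Claim-6/13-style sketch on one image row:
    (1) detect edge-transition pixels via a gradient detection map, then
    (2) replace each detected pixel with the flanking (non-transition)
    neighbor value it is closest to, hardening smooth edges into steps.
    """
    n = len(row)
    # Step 1: detection map marks pixels inside an edge-transition region.
    detect = [(i > 0 and abs(row[i] - row[i - 1]) > grad_thresh) or
              (i + 1 < n and abs(row[i + 1] - row[i]) > grad_thresh)
              for i in range(n)]
    out = row[:]
    for i in range(n):
        if not detect[i]:
            continue
        # Walk out of the transition region on each side.
        l = i
        while l > 0 and detect[l]:
            l -= 1
        r = i
        while r + 1 < n and detect[r]:
            r += 1
        # Step 2: replace with the flanking value within the smaller distance.
        left_v, right_v = row[l], row[r]
        out[i] = left_v if abs(row[i] - left_v) <= abs(row[i] - right_v) else right_v
    return out

# An anti-aliased (smooth) edge collapses into a hard, jaggy step.
print(jaggy_artifact_simulation([0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0]))
# [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
```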
With regard to claim 17, the closest prior art of record, Lung, does not disclose or suggest, among the other limitations, the additional required limitation of “…the non-transitory machine readable medium of Claim 16, wherein the instructions that when executed cause the at least one processor to perform the broken line artifact simulation comprise: instructions that when executed cause the at least one processor to apply an affine transform on the one or more objects in the environment of the high-quality image; instructions that when executed cause the at least one processor to synthesize the transformed one or more objects on an image grid using interpolation; instructions that when executed cause the at least one processor to apply an inverse affine transform to the transformed one or more objects; instructions that when executed cause the at least one processor to perform another interpolation of the inverse transformed one or more objects; and instructions that when executed cause the at least one processor to output a first aliasing artifact image comprising at least one broken line artifact”. These additional features, in combination with all the other features required in the claimed invention, are neither taught nor suggested by Lung. With regard to claims 18-19, the claims depend directly or indirectly from Claim 17 and each encompasses the required limitations recited in the claim discussed above.
With regard to claim 20, the closest prior art of record, Lung, does not disclose or suggest, among the other limitations, the additional required limitation of “…the non-transitory machine readable medium of Claim 16, wherein the instructions that when executed cause the at least one processor to perform the jaggy artifact simulation comprise: instructions that when executed cause the at least one processor to detect one or more edge transition regions in the high-quality image and outputting a detection map; instructions that when executed cause the at least one processor to identify, using the detection map, one or more pixels in the detection map associated with an edge transition region; instructions that when executed cause the at least one processor to determine whether the one or more pixels have values within a threshold distance to one or more neighboring pixels in the high-quality image; and instructions that when executed cause the at least one processor to replace the identified one or more pixels with the one or more neighboring pixels”. These additional features, in combination with all the other features required in the claimed invention, are neither taught nor suggested by Lung. Therefore, claims 3-7, 10-14 and 17-20 are objected to.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Morton et al. (EP1471728 A1) disclose a method and system for automatically reducing aliasing artifacts.

The Art Unit (or Workgroup) location of your application in the USPTO has changed. To aid in correlating any papers for this application, all further correspondence regarding this application should be directed to Art Unit 2681.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacky X. Zheng, whose telephone number is (571) 270-1122. The examiner can normally be reached Monday through Friday, 9:00 am - 5:00 pm, with alternate Fridays off.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong can be reached on (571) 272-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACKY X ZHENG/Primary Examiner, Art Unit 2681

Prosecution Timeline

Dec 04, 2023
Application Filed
Jan 08, 2026
Non-Final Rejection — §102
Mar 20, 2026
Interview Requested
Mar 26, 2026
Examiner Interview Summary
Mar 26, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594150
CLIP FOR COUPLING TO SCAN BODY FOR ACCURATE INTRAORAL SCANNING
2y 5m to grant Granted Apr 07, 2026
Patent 12593073
POINT CLOUD ENCODING AND DECODING METHOD AND DEVICE BASED ON TWO-DIMENSIONAL REGULARIZATION PLANE PROJECTION
2y 5m to grant Granted Mar 31, 2026
Patent 12584858
Rapid fresh digital-pathology method
2y 5m to grant Granted Mar 24, 2026
Patent 12587605
SERVICE PROVIDING SYSTEM WITH SYNCHRONIZATION OF ATTRIBUTE DATA
2y 5m to grant Granted Mar 24, 2026
Patent 12581046
PATHOLOGY REVIEW STATION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
97%
With Interview (+17.2%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 837 resolved cases by this examiner. Grant probability derived from career allow rate.
