Prosecution Insights
Last updated: April 19, 2026
Application No. 18/548,786

IMAGE PROCESSING METHOD AND APPARATUS, AND STORAGE MEDIUM AND DEVICE

Non-Final OA: §101, §103
Filed: Sep 01, 2023
Examiner: HSU, JONI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87% (741 granted / 848 resolved; +25.4% vs TC avg, above average)
Interview Lift: +7.2% among resolved cases with interview (moderate)
Avg Prosecution: 2y 9m typical; 34 applications currently pending
Total Applications: 882 across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 59.7% (+19.7% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 3.1% (-36.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 848 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on September 1, 2023, September 22, 2023, January 11, 2024, and September 18, 2025 were filed after the mailing date of the application on September 1, 2023. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1, 9, 11, and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP 2106 III provides a flowchart for the subject matter eligibility test for products and processes. The claim analysis following the flowchart is as follows.

Regarding Claim 1:

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes. It recites a method, which is a process.

Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. Claim 1 recites: "An image processing method, comprising: acquiring edge image information in an expansion direction of an original image; selecting a target expansion mode from at least two candidate expansion modes according to the edge image information; and processing the original image by using the target expansion mode to obtain a target image." All of these steps can be done mentally, because a person could look at an original image on a piece of paper and mentally acquire edge image information, mentally select a target expansion mode according to the edge image information, and mentally use the target expansion mode on the original image to draw a target image on a piece of paper.

Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. Claim 1 does not recite any computer elements.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. Claim 1 does not recite any computer elements.

Therefore, Claim 1 is not eligible subject matter under 35 U.S.C. 101.
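For orientation, here is a minimal Python sketch of the claim-1 flow as characterized above: acquire edge information, select one of two candidate expansion modes from it, then expand. Every function name and the variance-based selection criterion are illustrative assumptions, not taken from the application or the cited art.

```python
import numpy as np

def acquire_edge_info(image: np.ndarray, direction: str, width: int = 8) -> np.ndarray:
    """Return the pixel strip along the edge facing the expansion direction."""
    if direction == "right":
        return image[:, -width:]
    if direction == "left":
        return image[:, :width]
    if direction == "down":
        return image[-width:, :]
    return image[:width, :]  # "up"

def select_mode(edge_strip: np.ndarray, threshold: float = 100.0) -> str:
    """Pick one of two candidate expansion modes from an edge statistic (assumed: variance)."""
    return "copy_edge" if edge_strip.var() <= threshold else "generative"

def expand(image: np.ndarray, direction: str, length: int) -> np.ndarray:
    """Process the original image with the selected target expansion mode."""
    mode = select_mode(acquire_edge_info(image, direction))
    if mode == "copy_edge" and direction == "right":
        # First candidate mode: replicate the edge column `length` times.
        return np.hstack([image, np.repeat(image[:, -1:], length, axis=1)])
    raise NotImplementedError("generative mode / other directions omitted for brevity")

target = expand(np.full((64, 64), 128.0), "right", 16)
print(target.shape)  # (64, 80)
```

The threshold in select_mode merely stands in for whatever edge statistic the application actually uses; the claim 2 discussion below supplies the recited criterion (content complexity).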
Regarding Claim 9, it depends from Claim 1 with the additional limitations "wherein in response to determining that a plurality of expansion directions are provided, the method further comprises: acquiring each expansion ratio corresponding to each of the plurality of expansion directions, wherein an expansion ratio comprises a ratio of an expansion length corresponding to an expansion direction among the plurality of expansion directions to a side length of the original image corresponding to the expansion direction; and determining current expansion directions sequentially according to expansion ratios in ascending order, wherein from a second expansion direction among the plurality of expansion directions, when the original image is processed by using the target expansion mode, an original image corresponding to a current expansion direction among the plurality of expansion directions is a target image corresponding to a previous expansion direction among the plurality of expansion directions." All of these steps can be done mentally and/or through mathematical relationships and calculations, because a person can look at the original image on the piece of paper and mentally calculate each expansion ratio and mentally determine current expansion directions. Therefore, Claim 9 recites an abstract idea without additional elements. Similar to the discussion above with respect to Claim 1, no additional elements are recited to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. Therefore, Claim 9 is not eligible subject matter under 35 U.S.C. 101.

Claim 11 is similar in scope to Claim 1, but with the additional elements of a computer-readable storage medium that can cause a processor to perform operations. The computer-readable storage medium and processor are generic computer components that do not integrate the abstract ideas recited in these claims into a practical application or amount to significantly more (see MPEP 2106.05(a), (b), and (f)). The Examiner notes that Applicant's disclosure describes "computer-readable storage medium may be any tangible medium including or storing a program…computer-readable signal medium may include a data signal" (p. 23, lines 24-28). Thus, it is clear that a "computer-readable storage medium" is not a computer-readable signal medium, i.e., not a signal per se, so Claim 11 falls within a statutory category; it nevertheless remains ineligible for the reasons above.

Claim 12 is similar in scope to Claim 1, but with the additional elements of a memory that can cause a processor to perform operations. The memory and processor are generic computer components that do not integrate the abstract ideas recited in these claims into a practical application or amount to significantly more (see MPEP 2106.05(a), (b), and (f)).

Therefore, Claims 1, 9, 11, and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Toshihiro (JP 2008-204035 A) in view of Wang (US 2019/0007702 A1).

As per Claim 1, Toshihiro teaches an image processing method, comprising: acquiring edge image information in an expansion direction of an original image; selecting a target expansion mode from at least two candidate expansion modes; and processing the original image by using the target expansion mode to obtain a target image (according to an aspect of the present invention, detecting an edge of a picture formed by original image data for displaying a predetermined range, extending the detected edge of the picture to a region outside the predetermined range, calculating a surface surrounded by the extended edge of the picture outside the predetermined range, obtaining pixel information to be supplemented from pixel information of the original image data to be the same surface as the surrounded surface, and drawing the region outside the predetermined range with the pixel information to be supplemented, [0006]; in the flowchart of Fig. 4, the processing steps of detecting the contour of the textured image and dividing the surface by the edge are performed after referring to the outside of the range of the texture, but in this embodiment, the procedure of the image processing method is different in that the processing steps of detecting the contour of the textured image and dividing the surface by the edge are performed before the vertex processing, [0019]; Fig. 7 is a flowchart for still another 3D image process, and this embodiment is different from the embodiment shown in Fig. 4 in that complicated processing is not performed in the pixel processing, [0020]; this is not a process of searching for a texture image every time the range of the texture image is referred to as shown in Fig. 4, but is a process of completing all the calculations of edge extension and pixel interpolation for all four sides of the outer periphery of the texture image; the branch processing in the case of pixel processing shown in Fig. 4 is not performed, [0021]). However, Toshihiro does not teach selecting the target expansion mode according to the edge image information.
However, Wang teaches selecting a target expansion mode from at least two candidate expansion modes according to the edge image information (selecting a boundary fill method can include obtaining a sample value of the reference sample by using the horizontal image boundary fill method when the vertical ordinate of the reference sample is between the upper boundary and the lower boundary of the reference image and the horizontal ordinate of the reference sample is outside the left boundary and the right boundary of the reference image, and obtaining the sample value of the reference sample by using the vertical image boundary fill method when the vertical ordinate of the reference sample is outside the upper boundary and the lower boundary of the reference image, [0018]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Toshihiro to include selecting the target expansion mode according to the edge image information, because Wang suggests that this adaptively selects a more reasonable fill method according to the coordinates, thereby optimizing the fill method [0039].

As per Claim 11, Toshihiro does not expressly teach a computer-readable storage medium for storing a computer program, wherein when the computer program is executed by a processor, the method is performed. However, Wang teaches this limitation (all of the steps of the various methods according to the embodiments may be programmed to instruct the associated hardware to achieve the goals, which may be stored in a readable storage medium of a computer, [0093]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Toshihiro to include such a computer-readable storage medium, as suggested by Wang. It is well known in the art that a program needs to be stored so that it can be accessed and executed by the processor in order for the processor to perform operations.

As per Claim 12, Toshihiro teaches a computer device comprising a memory and a processor, wherein the processor performs the method (2 memory unit, 12 CPU, [0025]). However, Toshihiro does not expressly teach a computer program stored in the memory and executable by the processor, wherein when executing the computer program, the processor performs the method. However, Wang teaches this limitation [0093]. This would be obvious for the reasons given in the rejection for Claim 11.
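As a concrete reading of the Wang passage cited above ([0018]), the coordinate test that picks a fill method might look like the following sketch; the function and mode names are ours, not Wang's.

```python
# Hedged sketch: choose a horizontal vs. vertical boundary-fill method from
# where a reference sample (x, y) falls relative to the image bounds.
def choose_fill_method(x: int, y: int, width: int, height: int) -> str:
    inside_vertical = 0 <= y < height    # ordinate between upper and lower boundary
    inside_horizontal = 0 <= x < width   # abscissa between left and right boundary
    if inside_vertical and not inside_horizontal:
        return "horizontal_fill"         # clamp along the row toward the side edge
    if not inside_vertical:
        return "vertical_fill"           # clamp along the column toward top/bottom
    return "no_fill"                     # sample lies inside the reference image

print(choose_fill_method(-3, 10, 64, 64))  # horizontal_fill
print(choose_fill_method(5, 70, 64, 64))   # vertical_fill
```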
Claims 2, 4, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Toshihiro (JP 2008-204035 A) and Wang (US 2019/0007702 A1) in view of Bonnevie (WO 2020/242508 A1) and Choi (US 2021/0097646 A1).

As per Claim 2, Toshihiro and Wang are relied upon for the teachings as discussed above relative to Claim 1. Toshihiro teaches wherein the at least two candidate expansion modes comprise a first mode and a second mode [0006, 0019-0021], and the first mode is to implement expansion by copying edge pixels [0006]. The combination of Toshihiro and Wang teaches selecting the target expansion mode from the at least two candidate expansion modes according to the edge image information, as discussed in the rejection for Claim 1. However, Toshihiro and Wang do not teach that the second mode is to implement expansion based on a neural network model.

However, Bonnevie teaches implementing expansion based on a neural network model (generate extended images (that extend input images beyond their original borders) using a generative neural network, [0027]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Toshihiro and Wang so that the second mode is to implement expansion based on a neural network model, because Bonnevie suggests that this way the neural network can be trained to encourage the image extension system to generate extended images that are difficult to distinguish from real images [0042].

However, Toshihiro, Wang, and Bonnevie do not teach determining a complexity of image content of an edge region according to the edge image information; in response to determining that the complexity is less than or equal to a first preset complexity, determining the first mode as the target expansion mode; and in response to determining that the complexity is greater than a second preset complexity, determining the second mode as the target expansion mode, wherein the second preset complexity is greater than or equal to the first preset complexity.

However, Choi teaches this complexity-based selection between a first mode (less complex) and a second mode (implemented based on a complex neural network model) (determining the degree of scene change of the frame to be processed, and selecting the neural networks for image processing to be applied to a frame having the degree of scene change equal to or greater than the predetermined criterion, wherein the neural networks for image processing to be applied to the frame may have a higher complexity than the neural networks for image processing to be applied to a frame having the degree of scene change less than the predetermined criterion, [0032]). Thus, this teaching from Choi can be implemented into the combination of Toshihiro, Wang, and Bonnevie to arrive at the claimed mode selection. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to make this modification, because Choi suggests that it effectively generates the image by appropriately applying, to various frames, neural network models trained in various ways [0009].
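The two-threshold selection recited in claim 2 reduces to a small decision rule. A sketch, assuming made-up threshold values and leaving the complexity measure abstract:

```python
# Illustrative only: claim 2's two-threshold mode selection. The preset
# values are invented; complexity is whatever edge statistic the
# application actually recites.
def pick_expansion_mode(complexity: float,
                        first_preset: float = 0.3,
                        second_preset: float = 0.6) -> str:
    assert second_preset >= first_preset
    if complexity <= first_preset:
        return "first_mode"    # cheap edge-pixel copying
    if complexity > second_preset:
        return "second_mode"   # neural-network-based expansion
    return "either"            # the claim leaves the middle band unspecified

print(pick_expansion_mode(0.1), pick_expansion_mode(0.9))
```

Note the claim only constrains behavior below the first preset and above the second; when the presets differ, the band between them is genuinely unspecified, which the sketch makes explicit.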
As per Claim 4, Toshihiro and Wang do not teach wherein in response to determining that the target expansion mode comprises the second mode, processing the original image by using the target expansion mode comprises: intercepting an original sub-image from the original image according to an expansion length corresponding to the expansion direction; generating a mask image according to the original sub-image, wherein a size of the mask image is greater than a size of the original sub-image; inputting the original sub-image and the mask image into a target image generation network to obtain a generated image output by the target image generation network; and intercepting an expansion image from the generated image and generating the target image according to the original image and the expansion image.

However, Bonnevie teaches these limitations (mask image 114 identifies which portions of the baseline image correspond to (i) the image 102, and (ii) the default pixel values, [0051]; the generative network input includes a baseline image 208 and a mask image 210; the baseline image 208 is generated by masking a portion of the target extended image; the mask image 210 identifies the portion of the target extended image that has been masked in the baseline image, [0064]; the generative neural network 110 processes the generative network input to generate a corresponding extended image 212, [0066]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Toshihiro and Wang to include this mask-based generation flow, because Bonnevie suggests that the mask image is needed in order to identify which portions of the image correspond to the original image [0051].
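A rough sketch of the claim-4 data flow as mapped onto Bonnevie: crop a sub-image at the edge, build a larger mask marking the pixels to synthesize, and hand both to a generator. The numpy layout and sizes are assumptions, and no generator is implemented.

```python
import numpy as np

def build_generator_input(original: np.ndarray, expansion: int):
    """Prepare (sub-image, mask) for a hypothetical image generation network."""
    h, w = original.shape
    sub = original[:, w - expansion:]               # strip adjacent to the edge
    mask = np.zeros((h, sub.shape[1] + expansion))  # mask larger than the sub-image
    mask[:, sub.shape[1]:] = 1.0                    # 1 = region to generate
    return sub, mask

sub, mask = build_generator_input(np.ones((64, 64)), 16)
print(sub.shape, mask.shape)  # (64, 16) (64, 32)
# A generator would consume (sub, mask), and the right half of its output
# would be spliced back onto the original as the expansion image.
```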
As per Claim 6, Toshihiro and Wang do not teach wherein the target image generation network is obtained by training a preset network model, the preset network model is implemented based on Generative Adversarial Networks, a training process of the preset network model involves a plurality of training stages, and the types of loss functions corresponding to each of the plurality of training stages increase sequentially from a first training stage among the plurality of training stages to a last training stage among the plurality of training stages.

However, Bonnevie teaches wherein the target image generation network is obtained by training a preset network model (generate extended images (that extend input images beyond their original borders) using a generative neural network; the generative neural network is jointly trained, [0027]), the preset network model is implemented based on Generative Adversarial Networks (the adversarial training system jointly trains the generative neural network 110 using an adversarial loss objective function, [0055]), and the training process involves a plurality of training stages (trains the generative neural network using the adversarial loss objective function over multiple training iterations; the system adjusts the current values of the generative neural network parameters based on the adversarial loss objective function, where the adversarial loss objective function depends on the discriminative output generated by the discriminative neural network, [0121]). This would be obvious for the reasons given in the rejection for Claim 2.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Toshihiro (JP 2008-204035 A), Wang (US 2019/0007702 A1), Bonnevie (WO 2020/242508 A1), and Choi (US 2021/0097646 A1) in view of Golin (US 5,440,350 A) and Seo (US 2010/0165088 A1). Toshihiro, Wang, Bonnevie, and Choi are relied upon for the teachings as discussed above relative to Claim 2.
However, Toshihiro, Wang, Bonnevie, and Choi do not teach wherein the complexity is measured by using a first indicator, the first indicator comprises a mean square error of pixel values of the edge region, and the complexity corresponding to the mean square error is in direct proportion to the mean square error.

However, Golin teaches this limitation (determines the difference between the mean-square-error associated with the selected neighboring block and the mean-square-error associated with the block selected by the block selector, and then compares this difference with a threshold; if the difference between these two values is less than a predetermined threshold, then a determination is made that the block contains a moving edge; if the calculated difference has not fallen below the threshold, then a determination is made that the block does not contain a moving edge, col. 4, lines 18-31). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to make this modification, because Golin suggests that this is an efficient way to determine whether the edge is a moving edge and is complex (col. 4, lines 18-31).

However, Toshihiro, Wang, Bonnevie, Choi, and Golin do not teach wherein the complexity is measured by using a second indicator, the second indicator comprises a grayscale value of the edge region, and the complexity corresponding to the grayscale value is inversely proportional to the grayscale value.

However, Seo teaches measuring complexity by using such an indicator (image frame having the highest dynamic range based on a histogram of grayscale; the image frame having the highest complexity based on the number of edges of the image frame, [0016]). Since Golin teaches the first indicator (col. 4, lines 18-31), this teaching from Seo can be combined with the teaching from Golin so that the complexity is also measured by using the second indicator. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to make this combination, because Seo suggests that this is an efficient way to determine the complexity: the brightness can be determined from the grayscale value, and the higher the brightness, the higher the complexity [0076].
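The two claim-3 indicators can be read as simple statistics over the edge region. In this sketch the MSE indicator is the pixel variance (MSE about the mean) and the grayscale indicator follows the claim's inverse relation; the exact mappings are invented, not taken from the application or the cited art.

```python
import numpy as np

def complexity_from_mse(edge: np.ndarray) -> float:
    # First indicator: directly proportional to the mean square error of
    # edge-region pixel values (here, MSE about the region mean).
    return float(np.mean((edge - edge.mean()) ** 2))

def complexity_from_gray(edge: np.ndarray) -> float:
    # Second indicator: inversely proportional to the grayscale value,
    # per the claim language; the 1/(1+x) mapping is our assumption.
    return 1.0 / (1.0 + float(edge.mean()))

edge = np.random.rand(16, 64) * 255
print(complexity_from_mse(edge), complexity_from_gray(edge))
```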
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Toshihiro (JP 2008-204035 A), Wang (US 2019/0007702 A1), Bonnevie (WO 2020/242508 A1), and Choi (US 2021/0097646 A1) in view of Liu (US 2021/0279841 A1) and Dhatt (US 11,435,460 B2). Toshihiro, Wang, Bonnevie, and Choi are relied upon for the teachings as discussed above relative to Claim 4.

However, Toshihiro, Wang, Bonnevie, and Choi do not teach wherein before inputting the original sub-image and the mask image into the target image generation network, the method further comprises: performing image style recognition on the original image to obtain a target image style; and querying preset correspondences between different image styles and image generation networks.

However, Liu teaches wherein before inputting the original sub-image and the mask image (coverage mask for a tile, [0462]) into the target image generation network, the method further comprises performing image style recognition on the original image to obtain a target image style, where the target image generation network corresponds to the image style (a discriminator may determine if an image generated by a generator 208 in a GAN conforms to a specific style, [0071]; generator 302 takes as input 304 an image file of dimension (W, H) and outputs 314 an image file of dimension (W*K, H*K); output 314 is larger than input 304 and is expanded by a factor of K, [0080]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Toshihiro, Wang, Bonnevie, and Choi in this way, because Liu suggests that the generated image then corresponds to the style of the original image [0071].

However, Toshihiro, Wang, Bonnevie, Choi, and Liu do not teach querying preset correspondences according to the target image style to obtain the target image generation network, wherein the preset correspondences comprise correspondences between different image styles and image generation networks. However, Dhatt teaches this limitation (the user selects a style image corresponding to the style they would like to apply to the image, and the processor then selects the corresponding trained neural network associated with the selected style image, col. 4, lines 44-54). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include this querying, because Dhatt suggests that this way, the neural network that was trained to generate the desired style is selected (col. 4, lines 44-54).
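Claim 5's "preset correspondences" amount to a lookup from a recognized style to a trained generator. A sketch with hypothetical style names, file names, and a stubbed classifier:

```python
# Illustrative lookup only: styles, checkpoint names, and recognize_style()
# are all invented for this sketch.
def recognize_style(image) -> str:
    return "anime"  # stand-in for a real style classifier

STYLE_TO_NETWORK = {
    "anime": "generator_anime.pt",
    "photo": "generator_photo.pt",
    "sketch": "generator_sketch.pt",
}

def pick_generator(image) -> str:
    """Query the preset correspondences with the recognized target style."""
    return STYLE_TO_NETWORK[recognize_style(image)]

print(pick_generator(object()))  # generator_anime.pt
```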
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Toshihiro (JP 2008-204035 A), Wang (US 2019/0007702 A1), Bonnevie (WO 2020/242508 A1), and Choi (US 2021/0097646 A1) in view of Petrangeli (US 2022/0156886 A1). Toshihiro, Wang, Bonnevie, and Choi are relied upon for the teachings as discussed above relative to Claim 6.

However, Toshihiro and Wang do not teach wherein the first training stage comprises a reconstruction loss function, and a second training stage among the training stages comprises the reconstruction loss function, an adversarial loss function, and a perceptual loss function.

However, Bonnevie teaches wherein the first training stage comprises a reconstruction loss function and a second training stage comprises the reconstruction loss function and an adversarial loss function (in addition to training the generative neural network using the adversarial loss objective function, the system may additionally train the generative neural network using a reconstruction loss objective function, [0123]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Toshihiro and Wang in this way, because Bonnevie suggests that the current values of the generative neural network parameters are then adjusted based on the loss objective function so that the generated extended image is more similar to the target image [0123].

However, Toshihiro, Wang, and Bonnevie do not teach that the second training stage comprises the reconstruction loss function, the adversarial loss function, and a perceptual loss function, and that a third training stage among the training stages comprises the reconstruction loss function, the adversarial loss function, the perceptual loss function, and a structure similarity loss function. However, Petrangeli teaches these limitations (multi-view loss functions include a reconstruction loss, perceptual loss, structural similarity loss, and an adversarial loss, [0058]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these loss functions as suggested by Petrangeli. It is well known in the art that perceptual loss training enhances image quality, preserving texture and structure and resulting in more visually pleasing outputs, and that structural similarity loss training improves perceptual image quality, producing sharper edges and better preserving structural details.
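Claim 7's staged schedule, with loss terms accumulating stage by stage, can be laid out as plain data. The three-stage split follows the claim language; the losses are named, not implemented:

```python
# Schematic of claim 7's staged training: the set of loss types grows
# from the first stage to the last. Stage boundaries are not specified
# by the claim, so none are modeled here.
STAGES = [
    ["reconstruction"],
    ["reconstruction", "adversarial", "perceptual"],
    ["reconstruction", "adversarial", "perceptual", "structure_similarity"],
]

for i, losses in enumerate(STAGES, start=1):
    print(f"stage {i}: total loss = {' + '.join(losses)}")
```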
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Toshihiro (JP 2008-204035 A), Wang (US 2019/0007702 A1), Bonnevie (WO 2020/242508 A1), and Choi (US 2021/0097646 A1) in view of Chang (US 2024/0208419 A1). Toshihiro, Wang, Bonnevie, and Choi are relied upon for the teachings as discussed above relative to Claim 4.

However, Toshihiro, Wang, Bonnevie, and Choi do not teach wherein generating the target image according to the original image and the expansion image comprises: performing pixel weighted splicing on the original image and the expansion image to generate the target image, wherein an overlapping region of the original image and the expansion image comprises a plurality of pixel positions; a weight magnitude of a first pixel corresponding to each pixel position among the plurality of pixel positions is negatively correlated with a distance of the each pixel position relative to the original image; a weight magnitude of a second pixel corresponding to the each pixel position is positively correlated with the distance of the each pixel position relative to the original image; the first pixel is derived from the original image; and the second pixel is derived from the expansion image.

However, Chang teaches this limitation (weights corresponding to the overlapping parts of the two original images are linearly related to a distance between the coordinates of pixels and a splicing line; a panoramic image is generated by splicing top views, [0049]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to make this modification, because Chang suggests that this way there is a smooth transition in the overlapping regions [0049].
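Claim 8's weighted splicing is a linear cross-fade over the overlap. In this sketch the original image's weight falls, and the expansion image's weight rises, with distance from the original; the linear ramps are an assumption consistent with Chang's "linearly related" weights.

```python
import numpy as np

def weighted_splice(original_strip: np.ndarray, expansion_strip: np.ndarray) -> np.ndarray:
    """Blend two equally sized overlap strips column by column (left side nearest the original)."""
    n = original_strip.shape[1]
    d = np.linspace(0.0, 1.0, n)   # normalized distance from the original image
    w_orig = 1.0 - d               # negatively correlated with distance
    w_exp = d                      # positively correlated with distance
    return original_strip * w_orig + expansion_strip * w_exp

overlap = weighted_splice(np.full((4, 8), 100.0), np.full((4, 8), 200.0))
print(overlap[0])  # ramps smoothly from 100 toward 200
```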
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Toshihiro (JP 2008-204035 A) and Wang (US 2019/0007702 A1) in view of Suzuki (US 2005/0190202 A1). Toshihiro and Wang are relied upon for the teachings as discussed above relative to Claim 1.

However, Toshihiro and Wang do not expressly teach wherein in response to determining that a plurality of expansion directions are provided, the method further comprises: acquiring each expansion ratio corresponding to each of the plurality of expansion directions, wherein an expansion ratio comprises a ratio of an expansion length corresponding to an expansion direction among the plurality of expansion directions to a side length of the original image corresponding to the expansion direction; and determining current expansion directions sequentially according to expansion ratios in ascending order, wherein from a second expansion direction among the plurality of expansion directions, when the original image is processed by using the target expansion mode, an original image corresponding to a current expansion direction among the plurality of expansion directions is a target image corresponding to a previous expansion direction among the plurality of expansion directions.

However, Suzuki teaches these limitations (the horizontal expanding circuit 3 expands the image signal in the horizontal direction according to the horizontal expansion ratio and outputs the image signal expanded in the horizontal direction to the vertical expanding circuit 4; the vertical expanding circuit 4 expands the horizontally expanded image signal output from the horizontal expanding circuit 3 in the vertical direction according to the vertical expansion ratio and outputs the image signal expanded in the horizontal and vertical directions to the display, which displays the image signal expanded in the horizontal and vertical directions, [0042]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to make this modification, because Suzuki suggests that this way, the image is expanded to the desired ratio without causing distortion [0009].
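Claim 9's ordering can be sketched as computing each direction's ratio (expansion length over the corresponding side length) and sorting ascending, with each step's output chained into the next. The direction names and dict interface are assumptions.

```python
# Illustrative only: order expansion directions by ratio, ascending.
# The "side length corresponding to a direction" is read as the length
# of the edge being extended (left/right edges span the height, etc.).
def order_directions(image_w: int, image_h: int, expansions: dict) -> list:
    side = {"left": image_h, "right": image_h, "up": image_w, "down": image_w}
    ratios = {d: length / side[d] for d, length in expansions.items()}
    return sorted(ratios, key=ratios.get)

print(order_directions(64, 48, {"right": 32, "down": 8, "left": 16}))
# ['down', 'left', 'right'] -> process in this order, feeding each step's
# target image in as the next step's original image.
```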
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU, whose telephone number is (571) 272-7785. The examiner can normally be reached M-F, 10am-6:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JH
/JONI HSU/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Sep 01, 2023
Application Filed
Jan 28, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592028
METHODS AND DEVICES FOR IMMERSING A USER IN AN IMMERSIVE SCENE AND FOR PROCESSING 3D OBJECTS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586306
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MODELING OBJECT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586260
CREATING IMAGE ENHANCEMENT TRAINING DATA PAIRS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12581168
A METHOD FOR A MEDIA FILE GENERATING AND A METHOD FOR A MEDIA FILE PROCESSING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561850
IMAGE GENERATION WITH LEGIBLE SCENE TEXT
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants above.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 95% (+7.2%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 848 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month