Prosecution Insights
Last updated: April 19, 2026
Application No. 18/757,925

ELECTRONIC DEVICE, METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM FOR OBTAINING LABELING INFORMATION FOR TRAINING OF NEURAL NETWORK

Status: Non-Final OA (§103)
Filed: Jun 28, 2024
Examiner: NGUYEN, DAVID VAN
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Thinkware Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 14 across all art units, 14 currently pending
Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 78.6% (+38.6% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)

Compared against Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

Receipt is acknowledged of the information disclosure statement (IDS) submitted on 12/11/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6, 8-10, 14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Lyu et al. (CN 115171088 A), Pitie et al. (US 7796812 B2), and Ni et al. (EP 3907660 A1), hereinafter Lyu, Pitie, and Ni respectively.
Regarding claim 1, Lyu teaches an electronic device, comprising: memory, comprising one or more storage mediums, storing instructions, and at least one processor comprising processing circuitry (“The application relates to image recognition technology field, especially relates to an image generation method, device, electronic device and storage medium” - Abstract); and wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: obtain a first image including a virtual license plate including text objects (“then based on the initial license plate type and random license plate number, a standard style intermediate license plate is generated” – Pg 4, Par 5, Lines 5-6. [NOTE: the generated intermediate license plate refers to the virtual license plate]), using a template indicating a type of license plates (“Specifically, different license plate types corresponding to different standard license plate background image and standard license plate template information, for example, the color of the standard license plate background image corresponding to the small vehicle license plate is blue, the color of the standard license plate background image corresponding to the new energy vehicle license plate is gradually green, and different license plate types corresponding to different standard license plate template information.” – Pg 9, Par 4, Lines 1-5 [NOTE: standard style refers to the template information corresponding to the national motor vehicle license plate standard]), based on identifying a real license plate attached to a vehicle from a second image corresponding to the vehicle (Fig 4 shows an image with an initial (real) license plate attached to a vehicle),

[Image: Lyu et al. (CN 115171088 A), Fig. 4]

obtaining position information of the real license plate (“the license plate label information comprises the initial license plate type of the
initial license plate, and the initial license plate position of the initial license plate in the target vehicle image” - Pg 2, Par 1); and obtaining a third image including the vehicle to which the virtual license plate is attached by merging the second image with the first image (“based on the initial license plate position, replacing the initial license plate in the target vehicle image as the middle license plate; through the trained style migration model, performing style migration to the middle license plate in the target vehicle image, comprising obtain vehicle image of target license plate of real style.“ - Abstract).

Lyu does not teach obtaining color information regarding at least a portion of the real license plate, using the color information, change at least one color of pixels included in the first image, and storing a pair of the third image, the text objects, and position information indicating a position within the third image to which the first image is attached, as label information for training a neural network.

However, Pitie teaches obtaining color information regarding at least a portion of the real license plate (“determining, for each of the first and second color distributions, a one-dimensional histogram along a direction in a color space” – Abstract. [NOTE: Pitie does not explicitly disclose obtaining color information regarding the real license plate image, but instead obtains color information from images in general.
After the combination, obtaining the color information as taught by Pitie can be done alongside obtaining the position information of the real license plate image as taught by Lyu to teach obtaining position information of the real license plate and color information regarding at least a portion of the real license plate]), and using the color information, change at least one color of pixels included in the first image (“for each of the first and second color distributions, a one-dimensional histogram along a direction in a color space; matching the one-dimensional histogram determined for the first color distribution and the one-dimensional histogram determined for the second color distribution so as to generate a transform mapping; transforming the first color distribution based on the generated transform mapping” – Abstract [NOTE: Pitie describes a method for matching color in images which consists of using a transform mapping to remap the pixels of one image to that of the other’s color map. This teaches changing one color of pixels in the virtual license plate]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Lyu to incorporate the teachings of Pitie to obtain color information from the real license plate and change at least one color of pixels in the virtual license plate. Adjusting the colors of the virtual license plate to closely resemble the real image will create realistic and diverse training images that cover real-world scenarios such as blurring and bad lighting. This is advantageous for the neural network because training on realistic images improves the accuracy of license plate recognition.

Lyu in view of Pitie still does not teach storing a pair of the third image, the text objects, and position information indicating a position within the third image to which the first image is attached, as label information for training a neural network.
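The 1-D histogram matching Pitie is cited for can be made concrete. The following is a hedged sketch only, not part of the record and a simplification of Pitie's method: the quoted abstract speaks of a histogram along a direction in a color space, while this sketch matches each fixed RGB channel independently via CDF matching, and the function names are illustrative.

```python
import numpy as np

def match_channel(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Remap one channel of `src` so its empirical CDF matches `ref`'s."""
    src_vals, src_idx, src_counts = np.unique(
        src.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size   # quantile of each source value
    ref_cdf = np.cumsum(ref_counts) / ref.size   # quantile of each reference value
    # Send each source quantile to the reference value at the same quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(src.shape)

def match_rgb(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Apply 1-D histogram matching independently to each RGB channel."""
    return np.stack(
        [match_channel(src[..., c], ref[..., c]) for c in range(3)], axis=-1)
```

Matching a rendered virtual plate against a crop of the real plate in this way would pull the virtual plate's colors toward the real image's lighting.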
However, Ni teaches storing a pair of the third image, the text objects, and position information indicating a position within the third image to which the first image is attached, as label information for training a neural network (“and the license plate picture corresponding to each license plate in the set of license plates is synthesized with the corresponding vehicle appearance picture to obtain a training image corresponding to the corresponding license plate.” – Par 105, Lines 5-7 [NOTE: the training image refers to the image of a synthetic license plate replacing the real license plate. Ni also discloses that a feature map is used to find the position of the real license plate on the vehicle to replace with the synthetic one. Therefore, that position information can also be used for training a neural network; the training image (synthesized image) is the third image, the first image is the license plate image].).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Lyu in view of Pitie to incorporate the teachings of Ni to store the third image and its associated information as label information for training a neural network. Using the pseudo-realistic images created synthetically will offer the same diverse training with accurate labeling for the neural network while no longer requiring the use of personal info such as actual license plate numbers.

Regarding claim 9, the claim describes a method performing the steps for the function of the electronic device as disclosed in claim 1. Therefore, method claim 9 corresponds to the electronic device disclosed in claim 1 and is rejected for the same reasons of obviousness as used above.

Regarding claim 16, the claim describes a non-transitory computer readable storage medium (CRM) performing the steps for the function of the electronic device as disclosed in claim 1.
Therefore, non-transitory CRM claim 16 corresponds to the electronic device disclosed in claim 1 and is rejected for the same reasons of obviousness as used above.

Regarding claim 2, Lyu in view of Pitie and Ni teach the electronic device of claim 1. Lyu further teaches wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: project the virtual license plate included in the first image onto the real license plate attached to the vehicle (“based on the initial license plate position, replacing the initial license plate in the target vehicle image as the middle license plate” – Pg 6, Lines 52-53), using the position information indicating vertices of the real license plate (“the initial license plate position can be represented by the left upper corner and the right lower corner coordinate of the license plate, it also can be represented by the four corner coordinate of the license plate, for example, the license plate type of the initial license plate 1 is a blue card, the initial license plate position is [(x1, y1), (x2, y2), (x3, y3), (x4, y4)].” – Pg 7, Lines 34-39).

It would have been obvious to one of ordinary skill in the art to further include the teachings of Lyu to use the position information of the real license plate image to project the virtual license plate in its place. Finding the vertices of the license plate in the real image is a common strategy to locate the region of interest. Using this positional information, the virtual license plate could then accurately replace the real image, creating a realistic new image.

Lyu in view of Pitie still does not teach training the neural network for identifying a distinct real license plate different from the real license plate, from a fourth image different from the third image, using the label information.
However, Ni further teaches training the neural network for identifying a distinct real license plate different from the real license plate, from a fourth image different from the third image (“In this way, the license plate recognition model trained by the multiple training images may accurately recognize various types of license plates, which facilitates to improve the practicability of the method for recognizing the license plate.” – Par 117, Lines 3-5 [NOTE: Ni discloses training the license plate recognition neural network with a collection of both real and synthetic license plates so that it can accurately recognize license plates from different images, Par 104. These training images can be substituted by the separate fourth image different from the merged third image of the virtual plate and real image as taught by Lyu. After the combination, the label information obtained by Lyu can also be used by the neural network to train on the collection of real and synthetic images from Ni. This would then teach training the neural network for identifying a distinct real license plate different from the real license plate, from a fourth image different from the third image, using the label information.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Lyu in view of Pitie and Ni to incorporate the teachings of Ni to use a different image from the third image to train the license plate recognition neural network. It is very common in the art to supply a neural network with diverse training images in order to increase the accuracy of the detection. Training from different images alongside the merged virtual plate and real license prevents any bias that may occur if the training only included the merged images.

Regarding claim 10, the claim describes a method performing the steps for the function of the electronic device as disclosed in claim 2.
Therefore, method claim 10 corresponds to the electronic device disclosed in claim 2 and is rejected for the same reasons of obviousness as used above.

Regarding claim 17, the claim describes a non-transitory computer readable storage medium (CRM) performing the steps for the function of the electronic device as disclosed in claim 2. Therefore, non-transitory CRM claim 17 corresponds to the electronic device disclosed in claim 2 and is rejected for the same reasons of obviousness as used above.

Regarding claim 6, Lyu in view of Pitie and Ni teaches the electronic device of claim 1. Lyu does not teach wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: using the color information indicating RGB values corresponding to color of the real license plate, based on the changing each of RGB values corresponding to color of the first image, change the at least one color of the pixels of the first image.

However, Pitie further teaches wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: using the color information indicating RGB values corresponding to color of the real license plate (“The histogram of a color image is 3D because each pixel has associated with it 3 values for red, green and blue (in RGB space), or luminance, and two color components (in YUV space).” – Col 6, Lines 33-36. [NOTE: Pitie discloses that a histogram of an image is created to show the RGB values of each pixel. Pitie does not describe that the color information corresponds to the real license plate.
After combining Lyu’s images of real license plates attached to cars and generation of the virtual license plates, a color histogram that depicts both the real and virtual plates can be created to teach this element.]), based on the changing each of RGB values corresponding to color of the first image, change the at least one color of the pixels of the first image (“The resulting transformation t maps a pixel of color (I1, I2, I3) onto t(I1, I2, I3) = (t1(I1), t2(I2), t3(I3)).” – Col 8, Lines 22-23. [NOTE: Pitie demonstrates mapping pixels to new values in one image based off the values of the reference image using the RGB color information. Pitie does not specifically state that the images are of license plates. After combining Pitie with the real image of license plates and the generated virtual license plate taught by Lyu, the color of the pixels in the virtual plate can be changed based on the real image and would then teach this element.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Lyu to further incorporate the teachings of Pitie to use the color information indicating RGB values to change the first image to correspond to the colors of the real license plate. This technique would emulate real-world elements such as lighting conditions and blurring onto the virtual license plates, which are more idealized. Using these modified virtual license plates as training data for the license plate recognition neural network would have the predictable result of allowing the model to accurately identify license plates in the real-world environment.

Regarding claim 14, the claim describes a method performing the steps for the function of the electronic device as disclosed in claim 6.
Therefore, method claim 14 corresponds to the electronic device disclosed in claim 6 and is rejected for the same reasons of obviousness as used above.

Regarding claim 8, Lyu in view of Pitie and Ni teaches the electronic device of claim 1. Lyu further teaches wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: obtain the first image including the virtual license plate including mixed text objects by mixing the text objects with a designated size using the template (“Based on the standard license plate template information, respectively adjusting the size of the standard license plate background image and each character, and the position of each first character in the standard license plate background image” - Pg 2, Lines 18-20. [NOTE: Figure 7b shows an example of a virtual license plate that mixes alphabetical characters, numbers, and symbols, including ones from different countries]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to further incorporate the teachings of Lyu to obtain the first virtual image including mixed text objects with a designated size using the template. Mixing text objects such as characters, numbers, and symbols would have the predicted result of developing realistic license plates, which creates more diverse training images for the neural network to train on.

[Image: Lyu et al. (CN 115171088 A), Fig. 7b and Fig. 8]

Claims 3-4, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lyu, Pitie, Ni, Chua et al. (US 20220101037 A1) and Lee (US 20170133006 A1), hereinafter Chua and Lee respectively.

Regarding claim 3, Lyu in view of Pitie and Ni teach the electronic device of claim 2.
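The template-based synthesis Lyu is cited for (a standard background image plus characters sized and placed per template information) reduces to compositing glyph arrays onto a background. The following is a hedged sketch offered only as illustration; the function name and array conventions are assumptions, not from Lyu.

```python
import numpy as np

def compose_plate(background: np.ndarray, glyphs, positions) -> np.ndarray:
    """Paste each glyph array onto a copy of the background template
    at its designated (row, col) position."""
    plate = background.copy()
    for glyph, (r, c) in zip(glyphs, positions):
        h, w = glyph.shape[:2]
        plate[r:r + h, c:c + w] = glyph
    return plate
```

Glyph rendering (fonts, sizing) is out of scope here; in Lyu's terms the glyphs would come from the standard license plate character library and the positions from the template information.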
Lyu in view of Ni does not teach wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: obtain the color information by sampling the pixels included in the at least portion of the real license plate different from another portion of the real license plate including other text objects different from the text objects, using the color information, change color of at least a portion of the virtual license plate including the text objects of the virtual license plate and different from another portion of the virtual license plate, and generate a noise for the virtual license plate, based on changing parameters indicating pixels included in the virtual license plate of which the color of the at least the portion is changed.

However, Pitie further teaches using the color information, change color of at least a portion of the virtual license plate including the text objects of the virtual license plate and different from another portion of the virtual license plate (“The resulting transformation t maps a pixel of color (I1, I2, I3) onto t(I1, I2, I3) = (t1(I1), t2(I2), t3(I3)).” – Col 8, Lines 22-23 [NOTE: Pitie discloses a color matching technique that is pixel-by-pixel, which implies that the pixel colors changed in the virtual license plate are based off the color information in the real license. Therefore, one portion of the virtual license plate with the text may not have the same colors as a different portion if the color information of the real image determines that to be the case]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Lyu in view of Ni to further incorporate Pitie to change the color of the pixels corresponding to the text objects based on the color information of the real license image.
With the goal of generating realistic virtual license plates in mind, matching the colors pixel by pixel will ensure that different portions of the virtual license plate could have different colors in order to match the style of the real image. This will have the predicted result of emulating the style of the real image so that the virtual image can incorporate the non-ideal lighting or blurring that a real image would have.

Lyu in view of Pitie and Ni still does not teach wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: obtain the color information by sampling the pixels included in the at least portion of the real license plate different from another portion of the real license plate including other text objects different from the text objects.

However, Chua teaches wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: obtain the color information by sampling the pixels included in the at least portion of the real license plate different from another portion of the real license plate including other text objects different from the text objects (“In this embodiment, the characters have a high pixel value and the background have a low pixel value. When the pixel value of the characters is known, a binary segmentation method can be applied to generate character segments corresponding to each character.” – Par 34, Lines 2-6. [NOTE: Chua discloses a method of isolating the pixels that are considered the text objects. The color information obtained through the methods of Pitie can then be combined in order to specifically obtain the color information by sampling only the text object pixels.]).
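Chua's binary segmentation, combined with the color sampling discussed above, can be illustrated as follows. This is a hedged sketch under the quoted assumption that character pixels have high values; the helper name and threshold convention are illustrative, not Chua's.

```python
import numpy as np

def sample_text_pixels(plate_gray: np.ndarray, plate_rgb: np.ndarray,
                       thresh: float) -> np.ndarray:
    """Return the RGB values of pixels binarized as character ("text object")
    pixels, i.e. those whose grayscale value exceeds the threshold."""
    mask = plate_gray > thresh          # binary segmentation of characters
    return plate_rgb[mask]              # (N, 3) sampled character colors
```

The sampled colors could then feed a Pitie-style mapping restricted to the text-object region.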
It would have been obvious before the effective filing date of the present application to modify Lyu in view of Pitie and Ni to incorporate the teachings of Chua to sample the pixels in the real license plate that correspond to the text objects. By sampling the pixels correlated to the text objects, the color information can be obtained for those pixels in order to be mapped to the colors in the real image. This would result in the predictable outcome of a more accurate distinction between the text object pixels and background pixels of the plate for the color mapping.

Lyu in view of Pitie, Ni, and Chua still does not teach generating a noise for the virtual license plate, based on changing parameters indicating pixels included in the virtual license plate of which the color of the at least the portion is changed.

However, Lee teaches generating a noise for the virtual license plate (“The noisy training data may be data in which the clean training data is distorted, or in which clean training data and training noise data are mixed. For example, the noisy training data may be data in which the clean training data and a variety of noise data are mixed, or may be distorted data generated by adding a variety of modifications (for example, rotation, partial covering, a change in color or intensity of illumination, or other modifications in the case of image data” – Par 52, Lines 15-23.), based on changing parameters indicating pixels included in the virtual license plate of which the color of the at least the portion is changed (“The resulting transformation t maps a pixel of color (I1, I2, I3) onto t(I1, I2, I3) = (t1(I1), t2(I2), t3(I3)).” – Pitie: Col 8, Lines 22-23) [NOTE: Pitie discloses the pixels of an image being remapped with reference to the color information of another image, which is a change in the pixels’ parameters.
One of ordinary skill in the art could then add noise, as taught by Lee, to the pixels in the virtual license plate image after they have changed in color using the color information of the real image. Then the virtual license plate images with added noise can be used for training the license plate recognition model.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Lyu in view of Pitie, Ni, and Chua to incorporate the teachings of Lee to generate noise onto the pixels included in the virtual license plate whose color has changed. Adding noise to the synthesized license plate is a common technique for incorporating real-world conditions into what would normally be idealized images of license plates. This modification to the virtual license plates could then be used as training for the license plate recognition neural network to yield the predicted outcome of a more accurate model that can recognize the plates even in images that appear blurred, distorted, or noisy.

Regarding claim 11, the claim describes a method performing the steps for the function of the electronic device as disclosed in claim 3. Therefore, method claim 11 corresponds to the electronic device disclosed in claim 3 and is rejected for the same reasons of obviousness as used above.

Regarding claim 18, the claim describes a non-transitory computer readable storage medium (CRM) performing the steps for the function of the electronic device as disclosed in claim 3. Therefore, non-transitory CRM claim 18 corresponds to the electronic device disclosed in claim 3 and is rejected for the same reasons of obviousness as used above.

Regarding claim 4, Lyu, Pitie, Ni, Chua, and Lee teach the electronic device of claim 3.
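The Lee-style noise generation relied on above can be sketched briefly. Additive Gaussian noise is only one of the many distortions Lee lists; this minimal illustration (the function name and the [0, 255] clipping range are assumptions) is not Lee's actual implementation.

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float, rng=None) -> np.ndarray:
    """Distort a clean training image with additive Gaussian noise,
    clipped back to the valid [0, 255] intensity range."""
    gen = np.random.default_rng(rng)
    noisy = img.astype(float) + gen.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 255.0)
```

Applied after the color remapping, this would yield the noisy-but-labeled training images the rejection describes.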
Lyu further teaches wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: using the template including a text template indicating the text objects based on a designated font and a background template indicating a background of the virtual license plate (“obtain each first character corresponding to the random license plate number from the standard license plate character library; then, based on the standard license plate template information, adjust the standard license plate background respectively. The size of the image and each character, and the position of each first character in the standard license plate background image” – Pg 9, Lines 13-16 [NOTE: Alongside the background template, a license plate character library is used to obtain character for the virtual license plate. These characters can include numbers and symbols from different languages (see Fig 7b from rejection of claim 8).]), obtain the first image including the virtual license plate, by attaching, to a designated position of the background template, at least one of the text objects including a text object indicating a number and a text object indicating a character (“finally, based on the adjusted standard license plate background image and each adjusted first character, an intermediate license plate is generated.” – Pg 9, Lines 16-18, Fig. 7b). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to further incorporate Lyu to obtain the first virtual license plate by attaching text objects to the background plate template. Similar to using a template for the background of the virtual license plate, the text objects template would allow the use of realistic characters in known fonts to emulate real-life license plates from different countries. 
Attaching these text objects to the background templates would have a predictable result of generating virtual license plates that look like real license plates, including the font and style.

Regarding claim 12, the claim describes a method performing the steps for the function of the electronic device as disclosed in claim 4. Therefore, method claim 12 corresponds to the electronic device disclosed in claim 4 and is rejected for the same reasons of obviousness as used above.

Regarding claim 19, the claim describes a non-transitory computer readable storage medium (CRM) performing the steps for the function of the electronic device as disclosed in claim 4. Therefore, non-transitory CRM claim 19 corresponds to the electronic device disclosed in claim 4 and is rejected for the same reasons of obviousness as used above.

Claims 5, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lyu, Pitie, Ni, and Xia et al. (CN 115082917 A), hereinafter Xia.

Regarding claim 5, Lyu in view of Pitie and Ni teach the electronic device of claim 2. Lyu in view of Pitie and Ni does not teach wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: using the position information, using a matrix for mapping each of vertices of the virtual license plate to each of vertices of the real license plate, project the virtual license plate onto the real license plate, in the second image.
However, Xia teaches wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: using the position information, using a matrix for mapping each of vertices of the virtual license plate to each of vertices of the real license plate (“The license plate projection module is used to determine each type of license plate by performing a position information transformation calculation on the position information of the vertex of the standard license plate corresponding to each type of license plate and the position information of the vertex of the corresponding real license plate. The corresponding position information transformation matrix” – Pg 2, Lines 8-11), project the virtual license plate onto the real license plate, in the second image (“through the position information conversion matrix, projecting the standard license plate to the real license plate position” – Pg 2, Lines 11-13. [NOTE: Lyu in view of Pitie and Ni also teaches this element in the limitation, but it is more explicitly described in Xia]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Lyu in view of Pitie and Ni to incorporate the teachings of Xia to use the position information and a matrix to map the vertices of the virtual license plate to the vertices of the real license plate, projecting the virtual plate over the real plate. Mapping the vertices of the virtual and real license plates to each other is a known technique that results in the predicted outcome of accurate replacement of the virtual plate onto the real image.

Regarding claim 13, the claim describes a method performing the steps for the function of the electronic device as disclosed in claim 5. Therefore, method claim 13 corresponds to the electronic device disclosed in claim 5 and is rejected for the same reasons of obviousness as used above.
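The vertex-to-vertex transformation matrix Xia describes corresponds to a standard perspective (homography) mapping between the four corners of the standard plate and the four corners of the real plate. The following hedged sketch solves for and applies such a matrix; the direct-linear-transform setup is the textbook method, not necessarily Xia's exact calculation, and the names are illustrative.

```python
import numpy as np

def plate_homography(src_pts, dst_pts) -> np.ndarray:
    """Solve for the 3x3 matrix H (with h33 fixed to 1) that maps four
    source vertices onto four destination vertices."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence gives two linear equations in the 8 unknowns.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H: np.ndarray, pt):
    """Apply H to a 2-D point via homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

With the virtual plate's corners as source and the real plate's [(x1, y1), (x2, y2), (x3, y3), (x4, y4)] as destination, warping every virtual-plate pixel through H projects the plate into place.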
Regarding claim 20, the claim describes a non-transitory computer readable storage medium (CRM) performing the steps for the function of the electronic device as disclosed in claim 5. Therefore, non-transitory CRM claim 20 corresponds to the electronic device disclosed in claim 5 and is rejected for the same reasons of obviousness as stated above. Claim(s) 7 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lyu, Pitie, Ni, and Chowdhury et al (“Vehicle License Plate Detection Using Image Segmentation and Morphological Image Processing”), hereinafter Chowdhury. Regarding claim 7, Lyu in view of Pitie and Ni teaches the electronic device of claim 1. Lyu in view of Pitie and Ni does not teach wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: based on identifying pixels corresponding to the real license plate attached to the vehicle using pixels included in the second image, using an area including the pixels corresponding to the real license plate from the second image, identify the position information of the real license plate. However, Chowdhury teaches wherein the instructions, when executed by the at least one processor individually or collectively, cause the electronic device to: based on identifying pixels corresponding to the real license plate attached to the vehicle using pixels included in the second image (“Where the SCW algorithm traverses the entire image and changes the value of each and every pixel to either 0 or 1, based on a comparison between a threshold value and the ratio of the statistical measurements of both of the windows [4,5]. As a result, this algorithm keeps the pixels that have the possibility to be a part of the ROI” – Introduction Par 2.
[NOTE: Chowdhury describes an algorithm for segmenting the license plate in the image by identifying pixels in the region of interest.]), using an area including the pixels corresponding to the real license plate from the second image, identify the position information of the real license plate (“sliding concentric window (SCW) algorithm was used to keep the pixels that have similar characteristics as the pixels of the license plates region, then after implementing proper binarization and connected component analysis technique, the vehicle number plates location was determined” – Literature Review, Par 3). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Lyu in view of Pitie and Ni to incorporate the teachings of Chowdhury to identify the position of the real license plate in the image based on identifying pixels corresponding to the real license plate attached to the vehicle in the image. Pixel segmentation is a common technique in the art, as it offers pixel-level precision that draws out a clearer object shape. Using this technique to identify the license plate would yield the predictable outcome of obtaining position information of the license plate that is highly accurate and contains very few background pixels. Regarding claim 15, the claim describes a method performing the steps for the function of the electronic device as disclosed in claim 7. Therefore, method claim 15 corresponds to the electronic device disclosed in claim 7 and is rejected for the same reasons of obviousness as stated above. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID V. NGUYEN whose telephone number is (571)272-6111. The examiner can normally be reached on Mon-Fri from 8:30am-5:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
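The pipeline the examiner attributes to Chowdhury (keep plate-like pixels, binarize, then run connected-component analysis to locate the plate) can be sketched as below, with a plain intensity threshold standing in for the SCW statistical test. The synthetic image contents, the threshold value, and the function name are all illustrative assumptions.

```python
import numpy as np
from collections import deque

def locate_plate(image, threshold=128):
    """Keep pixels above the threshold (candidate plate pixels), group
    them into 4-connected components, and return the bounding box
    (top, left, bottom, right) of the largest component as the
    plate's position information."""
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    best, best_size = None, 0
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # Flood-fill one connected component starting at (sy, sx).
            q = deque([(sy, sx)])
            seen[sy, sx] = True
            pixels = []
            while q:
                y, x = q.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if len(pixels) > best_size:
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                best_size = len(pixels)
                best = (min(ys), min(xs), max(ys), max(xs))
    return best

# Synthetic 40x80 "second image": dark background, one bright
# plate-shaped region, and a single bright speck of noise.
img = np.zeros((40, 80), dtype=np.uint8)
img[22:30, 15:55] = 200   # the "real license plate"
img[5, 70] = 220          # noise forming a tiny one-pixel component
print(locate_plate(img))  # → (22, 15, 29, 54)
```

The returned bounding box is the "area including the pixels corresponding to the real license plate" from which claim 7's position information is identified; noise specks drop out because only the largest component is kept.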
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Y Poon, can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DAVID VAN NGUYEN/Examiner, Art Unit 2617 /KING Y POON/Supervisory Patent Examiner, Art Unit 2617

Prosecution Timeline

Jun 28, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573160
INTIMACY-BASED MASKING OF THREE DIMENSIONAL (3D) FACE LANDMARKS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
Favorable
Grant Probability
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
