Prosecution Insights
Last updated: April 19, 2026
Application No. 18/671,337

IMAGE PROCESSING METHOD AND APPARATUS AND STORAGE MEDIUM

Non-Final OA (§101, §103)
Filed: May 22, 2024
Examiner: WEI, XIAOMING
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Greater Shine Limited
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (28 granted / 34 resolved), +20.4% vs TC avg (above average)
Interview Lift: +26.1% among resolved cases with interview
Avg Prosecution: 2y 5m; 24 applications currently pending
Career History: 58 total applications across all art units
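These headline figures are simple ratios over the counts shown above; a minimal Python sketch (the function name and the back-solved Tech Center baseline are illustrative assumptions, not values from the report):

```python
# Reproduce the headline examiner statistics from the raw counts above.
# All names here are illustrative; only the counts come from the report.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate(28, 34)   # 28 granted out of 34 resolved
print(f"{rate:.1f}%")       # -> 82.4%, shown rounded as 82%

# The "+20.4% vs TC avg" delta implies a Tech Center baseline of
# roughly rate - 20.4, i.e. about 62%.
tc_avg = rate - 20.4
print(f"{tc_avg:.1f}%")     # -> 62.0%
```

The interview-lift figure cannot be reproduced the same way, since the report does not break out the with-interview and without-interview case counts.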

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 83.6% (+43.6% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 2.2% (-37.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 34 resolved cases.
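The per-statute deltas can be sanity-checked against the implied Tech Center baseline; a small sketch (dictionary names are illustrative, and the baseline is back-solved from the report's own rate and delta rather than taken from USPTO data):

```python
# Sanity-check the statute-specific deltas shown above. The implied
# Tech Center baseline is back-solved as rate - delta; names are
# illustrative, only the percentages come from the report.

examiner_rate = {"101": 7.1, "103": 83.6, "102": 4.4, "112": 2.2}
delta_vs_tc = {"101": -32.9, "103": 43.6, "102": -35.6, "112": -37.8}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: {rate:.1f}% (TC avg ~{tc_avg:.1f}%, {delta_vs_tc[statute]:+.1f}%)")
```

Notably, every statute back-solves to the same ~40% baseline, which suggests the report applies a single Tech-Center-wide average estimate across statutes.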

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 8-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 8, it recites "A method for picture processing, applied to a first processor, wherein the method comprises: packaging each set of picture data in at least one set of picture data to obtain at least one picture data packet, wherein each set of picture data comprises a plurality of frames of original pictures captured at a same moment with different exposure durations; and transmitting the at least one picture data packet to a second processor."

MPEP 2106 III provides a flowchart for the subject matter eligibility test for products and processes. The claim analysis following the flowchart is as follows:

Step 1: Is the claim to a process, machine, manufacture or composition of matter? Yes. It recites a method, which is a process.

Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. The limitation of claim 8, "packaging each set of picture data …..", as drafted, under its broadest reasonable interpretation, is directed to a data gathering process without significantly more. The packaging limitation can be achieved by a human user in his/her mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of a generic method, then it falls within the "mental process" grouping of abstract ideas.
Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The limitation in claim 8, "transmitting the at least one picture data packet ……", is an insignificant post-solution activity. Therefore, this judicial exception is not integrated into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. As discussed above, claim 8 recites the packaging and transmitting limitations identified as an abstract idea above (mental process and insignificant post-solution activity). Claim 8 further recites a first processor and a second processor; however, a human being can function as the first processor, and another human being can function as the second processor. The limitation of the first and second processors merely indicates a processing environment in which to apply a judicial exception, and the courts have identified mere data gathering and transmitting as well-understood, routine, and conventional activity. See MPEP 2106.05(d). Therefore, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Therefore, claim 8 is not eligible subject matter under 35 USC 101.
Regarding claim 9, it depends on claim 8: "The method of claim 8, further comprising: packaging, in each of the at least one picture data packet, a set of metadata corresponding to a set of picture data comprised in the respective picture data packet; or packaging a set of metadata corresponding to each set of picture data in the at least one set of picture data to obtain at least one metadata packet, and transmitting the at least one metadata packet to the second processor; or packaging partial data in a set of metadata corresponding to each set of picture data in the at least one set of picture data to obtain at least one partial data packet, and packaging, in each of the at least one picture data packet, data, which is not packaged to a partial data packet, in a set of metadata corresponding to a set of picture data comprised in the respective picture data packet." However, claim 9 only recites the packaging and transmitting limitations and the two processors; no additional limitation elements are recited, and therefore the answers to Step 2A, Prong Two and Step 2B are no. Claim 9 is not eligible subject matter under 35 USC 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 8-10 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ouedraogo et al. (US 20210109970 A1), hereinafter Ouedraogo.

Regarding claim 1, Ouedraogo teaches A method for picture processing, applied to a picture processing apparatus, wherein the picture processing apparatus comprises a first processor (Ouedraogo teaches a camera as a picture processing apparatus, an encapsulating process in Figure 2, and further teaches a computer processor can be used for various embodiments of the invention, paragraph [0210] “FIG. 2 illustrates the main steps of a process for encapsulating a series of images in one file using HEIF format. A camera may for instance apply this processing. The given example applies to the grouping of images captured by a camera according to different capture modes.”, paragraph [0834] “FIG. 9 is a schematic block diagram of a computing device 900 for implementation of one or more embodiments of the invention. The computing device 900 may be a device such as a micro-computer, a workstation or a light portable device. The computing device 900 comprises a communication bus connected to: a central processing unit 901, such as a microprocessor, denoted CPU;”) and a second processor (Ouedraogo teaches a decoder as the second processor, paragraph [0658-0659] “FIG.
3 illustrates the main steps of a parsing process of and HEIF file generated by the encapsulating process of FIG. 2. The decoding process starts by the parsing of an HEIF file with a series of images……When the capture mode corresponds to a bracketing capture mode, the decoder notifies in a step 304 the player that the HEIF file contains a series of bracketing images.”); and the method comprises: packaging, by the first processor, each set of picture data in at least one set of picture data to obtain at least one picture data packet (Ouedraogo paragraph [0223] “Once a capture mode has been selected in step 201 , the processing loop composed of steps 202 , 203 , and 204 is applied. Until the end of the capture mode (for example by activating a specific options or buttons in the graphical or physical user interface of the device), the capturing device first captures an image in a step 203 and then encapsulates the encoded image in file format in a step 204 .”), and transmitting, by the first processor, the at least one picture data packet to the second processor (Ouedraogo teaches the computing device 900 to implement the encapsulation and decoding of HEIF file, further teaches a network interface 904 to transmit data packets, paragraph [0658] “FIG. 3 illustrates the main steps of a parsing process of and HEIF file generated by the encapsulating process of FIG. 2. 
The decoding process starts by the parsing of an HEIF file with a series of images.” And paragraph [0838] “Data packets are written to the network interface for transmission or are read from the network interface for reception under the control of the software application running in the CPU 901”), wherein each set of picture data comprises a plurality of frames of original pictures captured at a same moment with different exposure durations (Ouedraogo paragraph [0386] “the camera is configured to group the images of a shot consisting in a series of several images at different exposure levels, for example three …… In the resulting multi-image file, a chunk will contain a number of samples equal to the number of images, three in the example, taken during the shot.”); selecting, by the second processor, a target data packet from the at least one picture data packet (Ouedraogo teaches an HDR grouping type, further teaches parsing grouping information in the decoder and allowing user to select appropriate shot, paragraph [0658-0659] “In a step 301 the Grouping Information is parsed…….the Grouping Information signals that the set of images belong to a capture series group (the grouping type is equal to ‘case’)……. When the capture mode corresponds to a bracketing capture mode, the decoder notifies in a step 304 the player that the HEIF file contains a series of bracketing images. In such a case, the application provides a GUI interface that permits to view the different bracketing alternatives……. for auto exposure bracketing the exposure stop of each shot are displayed in a step 305 in order to allow a user to select the appropriate shot. Upon selection of the preferred exposure, the decoding device may modify the HEIF file to mark the selected image as “primary item”.” And paragraph [0233-0234] “Another possibility is to add a new grouping corresponding to another capture mode. For example, at capture-time an auto-exposure bracketing grouping is created. 
Then at capture edition time, an HDR grouping containing a subset of the auto-exposure bracketing grouping is created. This HDR grouping is the result of selecting some images of the auto-exposure bracketing grouping for generating an HDR image.”); and parsing, by the second processor, a set of picture data comprised in the target data packet, and fusing, by the second processor, the parsed set of picture data to generate a high dynamic range picture (Ouedraogo paragraph [0519] “the set of images from the capture series is further processed by the capturing device to generate a new image. For instance, the capture device may generate an HDR image from a series of images captured with auto exposure bracketing. In such a case, an item reference is signaled between the HDR item and the identifier of the group of the series of images in the auto exposure bracketing.”). Ouedraogo and the current application are in the same field of endeavor, namely image processing and computer graphics. Ouedraogo teaches using HEIF file to store sequence of images and metadata to reduce the size of data (Ouedraogo paragraph [0293] “This reduces the number of properties to describe and thus the HEIF file is more compact.”, paragraph [0494] “As can be seen from the above example, having the Image property association map allowing association of one or more property to a group of images or entities leads to a less verbose description of the image file.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of various embodiments of Ouedraogo to reduce the size of image sequence. 
Regarding claim 8, Ouedraogo teaches A method for picture processing, applied to a first processor (Ouedraogo teaches a camera as a picture processing apparatus, an encapsulating process in Figure 2, further teaches a computer processor can be used for various embodiments of the invention, paragraph [0210] “FIG. 2 illustrates the main steps of a process for encapsulating a series of images in one file using HEIF format. A camera may for instance apply this processing. The given example applies to the grouping of images captured by a camera according to different capture modes.”), wherein the method comprises: packaging each set of picture data in at least one set of picture data to obtain at least one picture data packet (Ouedraogo paragraph [0223] “Once a capture mode has been selected in step 201 , the processing loop composed of steps 202 , 203 , and 204 is applied. Until the end of the capture mode (for example by activating a specific options or buttons in the graphical or physical user interface of the device), the capturing device first captures an image in a step 203 and then encapsulates the encoded image in file format in a step 204 .”), wherein each set of picture data comprises a plurality of frames of original pictures captured at a same moment with different exposure durations (Ouedraogo paragraph [0386] “the camera is configured to group the images of a shot consisting in a series of several images at different exposure levels, for example three …… In the resulting multi-image file, a chunk will contain a number of samples equal to the number of images, three in the example, taken during the shot.”); and transmitting the at least one picture data packet to a second processor (Ouedraogo teaches the decoder as the second processor, the computing device 900 to implement the encapsulation and decoding of HEIF file, further teaches a network interface 904 to transmit data packets, paragraph [0658-0659] “FIG. 
3 illustrates the main steps of a parsing process of and HEIF file generated by the encapsulating process of FIG. 2. The decoding process starts by the parsing of an HEIF file with a series of images……. When the capture mode corresponds to a bracketing capture mode, the decoder notifies in a step 304 the player that the HEIF file contains a series of bracketing images.” And paragraph [0838] “Data packets are written to the network interface for transmission or are read from the network interface for reception under the control of the software application running in the CPU 901”). Ouedraogo and the current application are in the same field of endeavor, namely image processing and computer graphics. Ouedraogo teaches using HEIF file to store sequence of images and metadata to reduce the size of data (Ouedraogo paragraph [0293] “This reduces the number of properties to describe and thus the HEIF file is more compact.”, paragraph [0494] “As can be seen from the above example, having the Image property association map allowing association of one or more property to a group of images or entities leads to a less verbose description of the image file.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of various embodiments of Ouedraogo to reduce the size of image sequence. 
Regarding claim 9, Ouedraogo teaches The method of claim 8, further comprising: and further teaches packaging, in each of the at least one picture data packet, a set of metadata corresponding to a set of picture data comprised in the respective picture data packet; or packaging a set of metadata corresponding to each set of picture data in the at least one set of picture data to obtain at least one metadata packet, and transmitting the at least one metadata packet to the second processor; or packaging partial data in a set of metadata corresponding to each set of picture data in the at least one set of picture data to obtain at least one partial data packet, and packaging, in each of the at least one picture data packet, data, which is not packaged to a partial data packet, in a set of metadata corresponding to a set of picture data comprised in the respective picture data packet (Ouedraogo teaches the HEIF file 101 as the picture data packet in Figure 1, meta box 102, moov box 103 and mdat box 104 are all included in HEIF 101, paragraph [0190-0192] “The ‘moov’ box is a file format box that contains ‘trak’ sub boxes, each ‘trak’ box describing a track, that is to say, a timed sequence of related samples…….This file contains a second box called ‘meta’ (MetaBox) 102 that is used to contain general untimed metadata including metadata structures describing the one or more still images…… The media data corresponding to these items is stored in the container for media data, the ‘mdat’ box 104 .”) Ouedraogo and the current application are in the same field of endeavor, namely image processing and computer graphics. 
Ouedraogo teaches using HEIF file to store sequence of images and metadata to reduce the size of data (Ouedraogo paragraph [0293] “This reduces the number of properties to describe and thus the HEIF file is more compact.”, paragraph [0494] “As can be seen from the above example, having the Image property association map allowing association of one or more property to a group of images or entities leads to a less verbose description of the image file.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of various embodiments of Ouedraogo to reduce the size of image sequence.

Regarding claim 10, Ouedraogo teaches A method for picture processing, applied to a second processor (Ouedraogo paragraph [0658-0659] “FIG. 3 illustrates the main steps of a parsing process of and HEIF file generated by the encapsulating process of FIG. 2. The decoding process starts by the parsing of an HEIF file with a series of images……When the capture mode corresponds to a bracketing capture mode, the decoder notifies in a step 304 the player that the HEIF file contains a series of bracketing images.”), wherein the method comprises: receiving at least one picture data packet transmitted by a first processor (Ouedraogo teaches the computing device 900 to implement the encapsulation and decoding of HEIF file, further teaches a network interface 904 to transmit data packets, paragraph [0658] “FIG. 3 illustrates the main steps of a parsing process of and HEIF file generated by the encapsulating process of FIG. 2.
The decoding process starts by the parsing of an HEIF file with a series of images.” And paragraph [0838] “Data packets are written to the network interface for transmission or are read from the network interface for reception under the control of the software application running in the CPU 901”), and selecting a target data packet from the at least one picture data packet (Ouedraogo teaches an HDR grouping type, further teaches parsing grouping information in the decoder and allowing user to select appropriate shot, paragraph [0658-0659] “In a step 301 the Grouping Information is parsed…….the Grouping Information signals that the set of images belong to a capture series group (the grouping type is equal to ‘case’)……. When the capture mode corresponds to a bracketing capture mode, the decoder notifies in a step 304 the player that the HEIF file contains a series of bracketing images. In such a case, the application provides a GUI interface that permits to view the different bracketing alternatives……. for auto exposure bracketing the exposure stop of each shot are displayed in a step 305 in order to allow a user to select the appropriate shot. Upon selection of the preferred exposure, the decoding device may modify the HEIF file to mark the selected image as “primary item”.” And paragraph [0233-0234] “Another possibility is to add a new grouping corresponding to another capture mode. For example, at capture-time an auto-exposure bracketing grouping is created. Then at capture edition time, an HDR grouping containing a subset of the auto-exposure bracketing grouping is created. 
This HDR grouping is the result of selecting some images of the auto-exposure bracketing grouping for generating an HDR image.”), wherein each picture data packet is obtained by packaging a set of picture data, and each set of picture data comprises a plurality of frames of original pictures captured at a same moment with different exposure durations (Ouedraogo paragraph [0386] “the camera is configured to group the images of a shot consisting in a series of several images at different exposure levels, for example three …… In the resulting multi-image file, a chunk will contain a number of samples equal to the number of images, three in the example, taken during the shot.”); and parsing a set of picture data comprised in the target data packet, and fusing the parsed set of picture data to generate a high dynamic range picture (Ouedraogo paragraph [0519] “the set of images from the capture series is further processed by the capturing device to generate a new image. For instance, the capture device may generate an HDR image from a series of images captured with auto exposure bracketing. In such a case, an item reference is signaled between the HDR item and the identifier of the group of the series of images in the auto exposure bracketing.”). Ouedraogo and the current application are in the same field of endeavor, namely image processing and computer graphics. Ouedraogo teaches using HEIF file to store sequence of images and metadata to reduce the size of data (Ouedraogo paragraph [0293] “This reduces the number of properties to describe and thus the HEIF file is more compact.”, paragraph [0494] “As can be seen from the above example, having the Image property association map allowing association of one or more property to a group of images or entities leads to a less verbose description of the image file.”). 
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of various embodiments of Ouedraogo to reduce the size of image sequence.

Regarding claim 17, Ouedraogo teaches the method of claim 1, and further teaches A picture processing apparatus, comprising a memory and at least one processor, wherein the memory is connected with the at least one processor, and the at least one processor (Ouedraogo paragraph [0834-0836] “FIG. 9 is a schematic block diagram of a computing device 900 for implementation of one or more embodiments of the invention. The computing device 900 may be a device such as a micro-computer, a workstation or a light portable device. The computing device 900 comprises a communication bus connected to: a central processing unit 901, such as a microprocessor, denoted CPU; a random access memory 902, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method according to embodiments of the invention”) is configured to perform the method of claim 1 (please refer to claim 1 for details). Ouedraogo and the current application are in the same field of endeavor, namely image processing and computer graphics. Ouedraogo teaches using HEIF file to store sequence of images and metadata to reduce the size of data (Ouedraogo paragraph [0293] “This reduces the number of properties to describe and thus the HEIF file is more compact.”, paragraph [0494] “As can be seen from the above example, having the Image property association map allowing association of one or more property to a group of images or entities leads to a less verbose description of the image file.”).
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of various embodiments of Ouedraogo to reduce the size of image sequence.

Regarding claim 18, Ouedraogo teaches the method of claim 1, and further teaches A non-transitory computer-readable storage medium, having stored thereon a computer program which, when executed by a processor (Ouedraogo paragraph [0170-0173] “there is provided a computer-readable storage medium storing instructions of a computer program for implementing a method according to the invention…….Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible, non-transitory carrier medium may comprise a storage medium” and paragraph [0843] “Such a software application, when executed by the CPU 901, causes the steps of the flowcharts of the invention to be performed.”), implements the method for picture processing of claim 1 (please refer to claim 1 for details). Ouedraogo and the current application are in the same field of endeavor, namely image processing and computer graphics. Ouedraogo teaches using HEIF file to store sequence of images and metadata to reduce the size of data (Ouedraogo paragraph [0293] “This reduces the number of properties to describe and thus the HEIF file is more compact.”, paragraph [0494] “As can be seen from the above example, having the Image property association map allowing association of one or more property to a group of images or entities leads to a less verbose description of the image file.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of various embodiments of Ouedraogo to reduce the size of image sequence.
Claims 2-4, 7, 11-13 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ouedraogo et al. (US 20210109970 A1), hereinafter Ouedraogo, in view of Lai et al. (IDS CN 111385475 A), hereinafter Lai. The original and a machine translation of Lai are provided by the examiner.

Regarding claim 2, Ouedraogo teaches The method of claim 1, wherein selecting, by the second processor, the target data packet from the at least one picture data packet comprises: but fails to teach when a photographing time stamp generated based on a photographing instruction is acquired, searching, by the second processor from the at least one picture data packet, for a picture data packet that a preset condition is met between a generation time stamp of a set of picture data comprised in the picture data packet and the photographing time stamp; and determining, by the second processor, the searched picture data packet as the target data packet. Lai teaches when a photographing time stamp generated based on a photographing instruction is acquired, searching, by the second processor from the at least one picture data packet, for a picture data packet that a preset condition is met between a generation time stamp of a set of picture data comprised in the picture data packet and the photographing time stamp; and determining, by the second processor, the searched picture data packet as the target data packet (Lai teaches the time of the shooting request instruction as the photographing time stamp, and further teaches selecting multiple RAW images as the target data packet based on the preset condition. Ouedraogo teaches that capture time can be stored as an image item property in metadata; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the time stamp of Lai with the capture time of Ouedraogo.
Ouedraogo paragraph [0433] “capture_time is the number of hundredth of seconds elapsed between the image item and a reference time from the capturing device. Using a value relative to an external reference enables to signal that two items were captured simultaneously (i.e. are synchronized) by giving them the same capture time.”. Lai paragraph [0101] “the processor 110 obtains the shooting time of each RAW image according to the timestamp of all RAW images in the storage area, selects multiple RAW images whose difference between the shooting time and the time when the shooting request instruction is received is less than a preset time difference, and outputs them. By selecting multiple RAW images for multi-frame fusion output, the target image obtained by multi-frame RAW image fusion can have higher clarity compared to the target image obtained from only one RAW image.”) Ouedraogo and Lai are in the same field of endeavor, namely image processing and computer graphics. Lai teaches a method of choosing an image from storage based on user’s input of time stamp in order to achieve certain degree of clarity and speeding up the shooting process (Lai paragraph [0098] “Therefore, the RAW image whose shooting time is closest to the time of receiving the photo request can be selected from the storage area and output. This ensures that the RAW image has a certain degree of clarity and speeds up the shooting process, thus improving the user experience.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lai with the method of Ouedraogo to improve user experience by providing certain degree of clarity and speeding up the shooting process. 
Regarding claim 3, Ouedraogo in view of Lai teach The method of claim 2, wherein searching, by the second processor, for the picture data packet that the preset condition is met between the generation time stamp of the set of picture data comprised in the picture data packet and the photographing time stamp comprises: and further teach acquiring, by the second processor, at least one set of metadata corresponding to the at least one set of picture data one-to-one (Ouedraogo paragraph [0192] “FIG. 1 illustrates an example of an HEIF file 101 that contains media data like one or more still images and possibly video or sequence of images. This file contains a first ‘ftyp’ box (FileTypeBox) 111 that contains an identifier of the type of file, (typically a set of four character codes). This file contains a second box called ‘meta’ (MetaBox) 102 that is used to contain general untimed metadata including metadata structures describing the one or more still images. This ‘meta’ box 102 contains an ‘iinf’ box (ItemInfoBox) 121 that describes several single images. Each single image is described by a metadata structure ItemInfoEntry also denoted items 1211 and 1212 . Each items has a unique 32-bit identifier item_ID. The media data corresponding to these items is stored in the container for media data, the ‘mdat’ box 104 .”), wherein each set of metadata comprises a picture frame number and a generation time stamp corresponding to a set of picture data (Ouedraogo teaches the item_order as the picture frame number and the capture_time as the generation time stamp of the picture, paragraph [0366-0368] “The syntax may be: aligned(8) class HDRProperty extends ItemProperty(‘hdr ’) {unsigned int(8) item_order; } with the following semantics: item_order is the ordering of the item inside its group.” and paragraph [0433] “capture_time is the number of hundredth of seconds elapsed between the image item and a reference time from the capturing device. 
Using a value relative to an external reference enables to signal that two items were captured simultaneously (i.e. are synchronized) by giving them the same capture time.”.); searching, by the second processor from the at least one set of metadata, for a set of metadata that the preset condition is met between a generation time stamp comprised in the set of metadata and the photographing time stamp (Lai paragraph [0101] “the processor 110 obtains the shooting time of each RAW image according to the timestamp of all RAW images in the storage area, selects multiple RAW images whose difference between the shooting time and the time when the shooting request instruction is received is less than a preset time difference, and outputs them. By selecting multiple RAW images for multi-frame fusion output, the target image obtained by multi-frame RAW image fusion can have higher clarity compared to the target image obtained from only one RAW image.”); and determining, by the second processor, a picture frame number comprised in the searched set of metadata as a target frame number, and searching, by the second processor, for a picture data packet associated with the target frame number from the at least one picture data packet (Ouedraogo teaches item_order as an ItemProperties, further teaches retrieving ItemProperties, paragraph [0659] “ the interface provides the information provided in the Property Information such as the ItemProperties to extract the characteristics of the capture associated to each image. In particular, for auto exposure bracketing the exposure stop of each shot are displayed in a step 305 in order to allow a user to select the appropriate shot. Upon selection of the preferred exposure, the decoding device may modify the HEIF file to mark the selected image as “primary item”.”). Ouedraogo and Lai are in the same field of endeavor, namely image processing and computer graphics. 
Lai teaches a method of choosing an image from storage based on user’s input of time stamp in order to achieve certain degree of clarity and speeding up the shooting process (Lai paragraph [0098] “Therefore, the RAW image whose shooting time is closest to the time of receiving the photo request can be selected from the storage area and output. This ensures that the RAW image has a certain degree of clarity and speeds up the shooting process, thus improving the user experience.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lai with the method of Ouedraogo to improve user experience by providing certain degree of clarity and speeding up the shooting process. Regarding claim 4, Ouedraogo in view of Lai teach The method of claim 3, wherein before acquiring, by the second processor, the at least one set of metadata corresponding to the at least one set of picture data one-to-one, the method further comprises: and further teach packaging, by the first processor in each of the at least one picture data packet, a set of metadata corresponding to a set of picture data comprised in the respective picture data packet (Ouedraogo teaches the HEIF file 101 as the picture data packet in Figure 1, meta box 102, moov box 103 and mdat box 104 are all included in HEIF 101); and acquiring, by the second processor, the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing, by the second processor, from the at least one picture data packet to obtain the at least one set of metadata (Ouedraogo teaches the decoder to parse HEIF file, paragraph [0658] “FIG. 3 illustrates the main steps of a parsing process of and HEIF file generated by the encapsulating process of FIG. 2. The decoding process starts by the parsing of an HEIF file with a series of images. 
In a step 301 the Grouping Information is parsed.” And paragraph [0743] “In a variant as illustrated on FIG. 5, the association of item properties 540 with an item 510 within the scope of a group 520 is performed directly in the ItemPropertiesBox 530 in MetaBox 50.”). Regarding claim 7, Ouedraogo in view of Lai teach The method of claim 3, wherein parsing, by the second processor, the set of picture data comprised in the target data packet comprises: and further teach acquiring, by the second processor, a set of metadata corresponding to a set of picture data comprised in the target data packet, and acquiring, by the second processor, parsing type information from the acquired set of metadata (Ouedraogo teaches the reference_type information in the metadata as the parsing type information, paragraph [0496] “the relationships between the series of images captured are described in one capture series as ItemReference. The ItemReferenceBox describes the reference between two items of an HEIF file and associates a type of Reference to each association through reference_type parameter. In this embodiment, one reference type is defined for each capture mode. The four character code of the reference_type to use for a given capture mode is the same as the one described for grouping_type.”); and determining, by the second processor, a picture parsing manner according to the parsing type information, and parsing, by the second processor, a set of picture data comprised in the target data packet according to the picture parsing manner (Ouedraogo paragraph [0498-0499] “Depending on the reference_type value, the HEIF parser determines the relationship between a first and a second item. For example, when reference_type is equal to:…… ‘hdr’: the second item is another item of the same HDR item set. ‘nois’: the second item is another item of the same noise reduction item set.”). 
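The "picture parsing manner" determination mapped to claim 7 amounts to dispatching on a four-character reference_type code, as in Ouedraogo [0496]-[0499]. A minimal hypothetical sketch follows; the handler names are invented for illustration, and codes are padded to four characters as in the quoted syntax.

```python
# Hedged sketch: a four-character reference_type code selects how a set
# of related picture items is processed. Handler names are hypothetical.

def parsing_manner(reference_type):
    handlers = {
        "hdr ": "fuse_exposures",   # items of the same HDR item set
        "nois": "noise_reduction",  # items of the same noise-reduction set
    }
    try:
        return handlers[reference_type]
    except KeyError:
        raise ValueError(f"unknown reference_type: {reference_type!r}")

print(parsing_manner("hdr "))  # -> fuse_exposures
```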
Regarding claim 11, Ouedraogo teaches The method of claim 10, wherein selecting the target data packet from the at least one picture data packet comprises: but fails to teach when a photographing time stamp generated based on a photographing instruction is acquired, searching, from the at least one picture data packet, for a picture data packet that a preset condition is met between a generation time stamp of a set of picture data comprised in the picture data packet and the photographing time stamp; and determining the searched picture data packet as the target data packet. Lai teaches when a photographing time stamp generated based on a photographing instruction is acquired, searching, from the at least one picture data packet, for a picture data packet that a preset condition is met between a generation time stamp of a set of picture data comprised in the picture data packet and the photographing time stamp; and determining the searched picture data packet as the target data packet (Lai teaches the time of shooting request instruction as the photographing time stamp, further teaches selecting multiple raw images as the target data packet based on the preset condition. Ouedraogo teaches capture time can be stored as image item property in metadata, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the time stamp of Lai with the capture time of Ouedraogo. Ouedraogo paragraph [0433] “capture_time is the number of hundredth of seconds elapsed between the image item and a reference time from the capturing device. Using a value relative to an external reference enables to signal that two items were captured simultaneously (i.e. are synchronized) by giving them the same capture time.”. 
Lai paragraph [0101] “the processor 110 obtains the shooting time of each RAW image according to the timestamp of all RAW images in the storage area, selects multiple RAW images whose difference between the shooting time and the time when the shooting request instruction is received is less than a preset time difference, and outputs them. By selecting multiple RAW images for multi-frame fusion output, the target image obtained by multi-frame RAW image fusion can have higher clarity compared to the target image obtained from only one RAW image.”) Ouedraogo and Lai are in the same field of endeavor, namely image processing and computer graphics. Lai teaches a method of choosing an image from storage based on user’s input of time stamp in order to achieve certain degree of clarity and speeding up the shooting process (Lai paragraph [0098] “Therefore, the RAW image whose shooting time is closest to the time of receiving the photo request can be selected from the storage area and output. This ensures that the RAW image has a certain degree of clarity and speeds up the shooting process, thus improving the user experience.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lai with the method of Ouedraogo to improve user experience by providing certain degree of clarity and speeding up the shooting process. Regarding claim 12, Ouedraogo in view of Lai teach The method of claim 11, wherein searching for the picture data packet that the preset condition is met between the generation time stamp of the set of picture data comprised in the picture data packet and the photographing time stamp comprises: and further teach acquiring at least one set of metadata corresponding to the at least one set of picture data one-to-one (Ouedraogo paragraph [0192] “FIG. 
1 illustrates an example of an HEIF file 101 that contains media data like one or more still images and possibly video or sequence of images. This file contains a first ‘ftyp’ box (FileTypeBox) 111 that contains an identifier of the type of file, (typically a set of four character codes). This file contains a second box called ‘meta’ (MetaBox) 102 that is used to contain general untimed metadata including metadata structures describing the one or more still images. This ‘meta’ box 102 contains an ‘iinf’ box (ItemInfoBox) 121 that describes several single images. Each single image is described by a metadata structure ItemInfoEntry also denoted items 1211 and 1212 . Each items has a unique 32-bit identifier item_ID. The media data corresponding to these items is stored in the container for media data, the ‘mdat’ box 104 .”), wherein each set of metadata comprises a picture frame number and a generation time stamp corresponding to a set of picture data (Ouedraogo teaches the item_order as the picture frame number and the capture_time as the generation time stamp of the picture, paragraph [0366-0368] “The syntax may be: aligned(8) class HDRProperty extends ItemProperty(‘hdr ’) {unsigned int(8) item_order; } with the following semantics: item_order is the ordering of the item inside its group.” and paragraph [0433] “capture_time is the number of hundredth of seconds elapsed between the image item and a reference time from the capturing device. Using a value relative to an external reference enables to signal that two items were captured simultaneously (i.e. 
are synchronized) by giving them the same capture time.”.); searching, from the at least one set of metadata, for a set of metadata that the preset condition is met between a generation time stamp comprised in the set of metadata and the photographing time stamp (Lai paragraph [0101] “the processor 110 obtains the shooting time of each RAW image according to the timestamp of all RAW images in the storage area, selects multiple RAW images whose difference between the shooting time and the time when the shooting request instruction is received is less than a preset time difference, and outputs them. By selecting multiple RAW images for multi-frame fusion output, the target image obtained by multi-frame RAW image fusion can have higher clarity compared to the target image obtained from only one RAW image.”); and determining a picture frame number comprised in the searched set of metadata as a target frame number, and searching for a picture data packet associated with the target frame number from the at least one picture data packet (Ouedraogo teaches item_order as an ItemProperties, further teaches retrieving ItemProperties, paragraph [0659] “ the interface provides the information provided in the Property Information such as the ItemProperties to extract the characteristics of the capture associated to each image. In particular, for auto exposure bracketing the exposure stop of each shot are displayed in a step 305 in order to allow a user to select the appropriate shot. Upon selection of the preferred exposure, the decoding device may modify the HEIF file to mark the selected image as “primary item”.”). Ouedraogo and Lai are in the same field of endeavor, namely image processing and computer graphics. 
Lai teaches a method of choosing an image from storage based on user’s input of time stamp in order to achieve certain degree of clarity and speeding up the shooting process (Lai paragraph [0098] “Therefore, the RAW image whose shooting time is closest to the time of receiving the photo request can be selected from the storage area and output. This ensures that the RAW image has a certain degree of clarity and speeds up the shooting process, thus improving the user experience.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lai with the method of Ouedraogo to improve user experience by providing certain degree of clarity and speeding up the shooting process. Regarding claim 13, Ouedraogo in view of Lai teach The method of claim 12, and further teach wherein each of the at least one picture data packet comprises a set of metadata corresponding to a set of picture data (Ouedraogo teaches the HEIF file 101 as the picture data packet in Figure 1, meta box 102, moov box 103 and mdat box 104 are all included in HEIF 101); and acquiring the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing from the at least one picture data packet to obtain the at least one set of metadata (Ouedraogo teaches the decoder to parse HEIF file, paragraph [0658] “FIG. 3 illustrates the main steps of a parsing process of and HEIF file generated by the encapsulating process of FIG. 2. The decoding process starts by the parsing of an HEIF file with a series of images. In a step 301 the Grouping Information is parsed.” And paragraph [0743] “In a variant as illustrated on FIG. 5, the association of item properties 540 with an item 510 within the scope of a group 520 is performed directly in the ItemPropertiesBox 530 in MetaBox 50.”). 
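The parsing step mapped to claim 13, extracting metadata from a picture data packet, corresponds to walking the box structure of an HEIF/ISOBMFF file (the 'ftyp', 'meta', and 'mdat' boxes of Ouedraogo [0192]). Below is a minimal sketch of the top-level scan under standard ISOBMFF conventions (a 32-bit big-endian size followed by a four-character type); real HEIF parsing also handles 64-bit sizes and nested boxes.

```python
# Minimal sketch of scanning top-level ISOBMFF boxes to locate the 'meta'
# box carrying untimed metadata. Simplified: no 64-bit sizes, no nesting.
import struct

def top_level_boxes(data):
    """Yield (four_char_type, payload) for each top-level box."""
    pos = 0
    while pos + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, pos)
        if size < 8:  # malformed box; stop rather than loop forever
            break
        yield box_type.decode("ascii"), data[pos + 8 : pos + size]
        pos += size

# Two toy boxes: 'ftyp' with brand 'heic', then an empty 'meta' box.
blob = struct.pack(">I4s4s", 12, b"ftyp", b"heic") + struct.pack(">I4s", 8, b"meta")
print([t for t, _ in top_level_boxes(blob)])  # -> ['ftyp', 'meta']
```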
Regarding claim 16, Ouedraogo teaches The method of claim 10, wherein parsing the set of picture data comprised in the target data packet comprises: and further teach acquiring a set of metadata corresponding to the set of picture data comprised in the target data packet, and acquiring parsing type information from the acquired set of metadata (Ouedraogo teaches the reference_type information in the metadata as the parsing type information, paragraph [0496] “the relationships between the series of images captured are described in one capture series as ItemReference. The ItemReferenceBox describes the reference between two items of an HEIF file and associates a type of Reference to each association through reference_type parameter. In this embodiment, one reference type is defined for each capture mode. The four character code of the reference_type to use for a given capture mode is the same as the one described for grouping_type.”); and determining a picture parsing manner according to the parsing type information, and parsing the set of picture data comprised in the target data packet according to the picture parsing manner (Ouedraogo paragraph [0498-0499] “Depending on the reference_type value, the HEIF parser determines the relationship between a first and a second item. For example, when reference_type is equal to:…… ‘hdr’: the second item is another item of the same HDR item set. ‘nois’: the second item is another item of the same noise reduction item set.”). Claim(s) 5 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ouedraogo et al. (US 20210109970 A1), hereinafter as Ouedraogo, in view of Lai et al. (IDS CN 111385475 A), hereinafter as Lai, further in view of Li et al. (IDS CN 110062161 A), hereinafter as Li. The original and a machine translation of Lai and Li are provided by the examiner. 
Regarding claim 5, Ouedraogo in view of Lai teach The method of claim 3, wherein before acquiring, by the second processor, the at least one set of metadata corresponding to the at least one set of picture data one-to-one, the method further comprises: but fail to teach packaging, by the first processor, each set of metadata in the at least one set of metadata to obtain at least one metadata packet, and transmitting, by the first processor, the at least one metadata packet to the second processor; and acquiring, by the second processor, the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing from the at least one metadata packet to obtain the at least one set of metadata. Li teaches packaging, by the first processor, each set of metadata in the at least one set of metadata to obtain at least one metadata packet (Li teaches sending image data and metadata in two separate queues, paragraph [0031] “The camera service module 18 encapsulates the image data and metadata and transmits the encapsulated image data and metadata to the algorithm post-processing module 16…… two queues can be created in the process of the camera service module 18, one being an image data queue and the other a metadata queue.”), and transmitting, by the first processor, the at least one metadata packet to the second processor (Li paragraph [0031] “The image data and metadata of the same frame are transmitted to the algorithm post-processing module 16 through the two queues.”); and acquiring, by the second processor, the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing from the at least one metadata packet to obtain the at least one set of metadata (Li paragraph [0031] “The algorithm post-processing module 16 also has two corresponding queues. 
These two queues are created after the camera is started and are used to store image data and metadata respectively.” And paragraph [0050] “The post-processing module 16 uses image processing algorithms and processes image data according to metadata to achieve post-processing of the image.”). Ouedraogo, Lai and Li are in the same field of endeavor, namely image processing and computer graphics. Li teaches using two queues to transmit encapsulated image and metadata to improve efficiency and security (Li paragraph [0031] “by encapsulating the image through the camera service module 18, the efficiency of image transmission can be improved, and the security of image transmission can also be enhanced.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Li with the method of Ouedraogo and Lai to improve efficiency and security. Regarding claim 14, Ouedraogo in view of Lai teach The method of claim 12, wherein before acquiring the at least one set of metadata corresponding to the at least one set of picture data one-to-one, the method further comprises: but fail to teach receiving at least one metadata packet transmitted by the first processor, wherein each of the at least one metadata packet is generated by packaging a respective set of metadata in the at least one set of metadata; and acquiring the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing from the at least one metadata packet to obtain the at least one set of metadata. 
Li teaches receiving at least one metadata packet transmitted by the first processor, wherein each of the at least one metadata packet is generated by packaging a respective set of metadata in the at least one set of metadata (Li teaches sending image data and metadata in two separate queues, paragraph [0031] “The camera service module 18 encapsulates the image data and metadata and transmits the encapsulated image data and metadata to the algorithm post-processing module 16…… two queues can be created in the process of the camera service module 18, one being an image data queue and the other a metadata queue…… The image data and metadata of the same frame are transmitted to the algorithm post-processing module 16 through the two queues.”); and acquiring the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing from the at least one metadata packet to obtain the at least one set of metadata (Li paragraph [0031] “The algorithm post-processing module 16 also has two corresponding queues. These two queues are created after the camera is started and are used to store image data and metadata respectively.” And paragraph [0050] “The post-processing module 16 uses image processing algorithms and processes image data according to metadata to achieve post-processing of the image.”). Ouedraogo, Lai and Li are in the same field of endeavor, namely image processing and computer graphics. Li teaches using two queues to transmit encapsulated image and metadata to improve efficiency and security (Li paragraph [0031] “by encapsulating the image through the camera service module 18, the efficiency of image transmission can be improved, and the security of image transmission can also be enhanced.”). 
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Li with the method of Ouedraogo and Lai to improve efficiency and security. Claim(s) 6 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ouedraogo et al. (US 20210109970 A1), hereinafter as Ouedraogo, in view of Lai et al. (IDS CN 111385475 A), hereinafter as Lai, further in view of Oh et al. (US 20200084428 A1), hereinafter as Oh. The original and a machine translation of Lai are provided by the examiner. Regarding claim 6, Ouedraogo in view of Lai teach The method of claim 3, wherein before acquiring, by the second processor, the at least one set of metadata corresponding to the at least one set of picture data one-to-one, the method further comprises: but fail to teach packaging, by the first processor, partial data in each set of metadata in the at least one set of metadata to obtain at least one partial data packet, and packaging, by the first processor in each of the at least one picture data packet, data, which is not packaged to a partial data packet, in the set of metadata corresponding to a set of picture data comprised in the respective picture data packet; and transmitting, by the first processor, the at least one partial data packet to the second processor; and acquiring, by the second processor, the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing, by the second processor, from the at least one partial data packet and the at least one picture data packet to obtain the at least one set of metadata. 
Oh teaches packaging, by the first processor, partial data in each set of metadata in the at least one set of metadata to obtain at least one partial data packet, and packaging, by the first processor in each of the at least one picture data packet, data, which is not packaged to a partial data packet, in the set of metadata corresponding to a set of picture data comprised in the respective picture data packet (Oh teaches transmitted portion of the metadata using a signaling table, further teaches transmitting remaining portion of the metadata in a file format, paragraph [0160-0161] “The 360-degree-video-related metadata may be transmitted while being included in a separate signaling table, or may be transmitted while being included in DASH MPD, or may be transmitted while being included in the form of a box in a file format of ISOBMFF…….In some embodiments, a portion of the metadata, a description of which will follow, may be transmitted while being configured in the form of a signaling table, and the remaining portion of the metadata may be included in the form of a box or a track in a file format.”); and transmitting, by the first processor, the at least one partial data packet to the second processor (Oh paragraph [0092] “The transmission unit may transmit the transmission-processed 360-degree video data and/or the 360-degree-video-related metadata through the broadcast network and/or the broadband connection. 
The transmission unit may include an element for transmission through the broadcast network and/or an element for transmission through the broadband connection.”); and acquiring, by the second processor, the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing, by the second processor, from the at least one partial data packet and the at least one picture data packet to obtain the at least one set of metadata (Oh paragraph [0102-0103] “The reception-processing unit may deliver the acquired 360-degree video data to the decapsulation-processing unit, and may deliver the acquired 360-degree-video-related metadata to the metadata parser. The 360-degree-video-related metadata, acquired by the reception-processing unit, may have the form of a signaling table…….The decapsulation-processing unit may decapsulate the 360-degree video data, received in file form from the reception-processing unit. The decapsulation-processing unit may decapsulate the files based on ISOBMFF, etc. to acquire 360-degree video data and 360-degree-video-related metadata. “). Ouedraogo, Lai and Oh are in the same field of endeavor, namely image processing and computer graphics. Oh teaches a method of transmitting video data and metadata to improve efficiency (Oh paragraph [0028] “According to the present invention, it is possible to propose a method of efficiently increasing transmission capacity and transmitting necessary information at the time of transmitting 360-degree content.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Oh with the method of Ouedraogo and Lai to improve efficiency. 
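The Oh split quoted above can be pictured as partitioning one metadata set into a separately transmitted portion (e.g. a signaling table) and a remainder left embedded with the picture data, which the receiver recombines. The sketch below is purely illustrative; the key partition is hypothetical and not taken from any of the references.

```python
# Hedged sketch of splitting one set of metadata into a separately sent
# partial packet and an embedded remainder (cf. Oh [0160]-[0161]).
# Which keys go where is a hypothetical choice for illustration.

def split_metadata(metadata, signaled_keys):
    """Partition a metadata dict into (partial_packet, embedded_remainder)."""
    partial = {k: v for k, v in metadata.items() if k in signaled_keys}
    embedded = {k: v for k, v in metadata.items() if k not in signaled_keys}
    return partial, embedded

meta = {"frame_number": 7, "generation_ts": 10421, "parsing_type": "hdr "}
partial, embedded = split_metadata(meta, signaled_keys={"frame_number", "generation_ts"})
# The receiver merges both parts to recover the full set of metadata.
assert {**partial, **embedded} == meta
```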
Regarding claim 15, Ouedraogo in view of Lai teach The method of claim 12, wherein before acquiring the at least one set of metadata corresponding to the at least one set of picture data one-to-one, the method further comprises: but fail to teach receiving at least one partial data packet transmitted by the first processor, wherein the at least one partial data packet is generated by packaging partial data in each of the at least one set of metadata; and each of the at least one picture data packet comprises data, which is not packaged to the partial data packet, in the set of metadata corresponding to a set of picture data comprised in the respective picture data packet; and acquiring the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing from the at least one partial data packet and the at least one picture data packet to obtain the at least one set of metadata. Oh teaches receiving at least one partial data packet transmitted by the first processor, wherein the at least one partial data packet is generated by packaging partial data in each of the at least one set of metadata; and each of the at least one picture data packet comprises data, which is not packaged to the partial data packet, in the set of metadata corresponding to a set of picture data comprised in the respective picture data packet (Oh teaches transmitted portion of the metadata using a signaling table, further teaches transmitting remaining portion of the metadata in a file format, paragraph [0160-0161] “The 360-degree-video-related metadata may be transmitted while being included in a separate signaling table, or may be transmitted while being included in DASH MPD, or may be transmitted while being included in the form of a box in a file format of ISOBMFF…….In some embodiments, a portion of the metadata, a description of which will follow, may be transmitted while being configured in the form of a signaling table, and the remaining 
portion of the metadata may be included in the form of a box or a track in a file format.”); and acquiring the at least one set of metadata corresponding to the at least one set of picture data one-to-one comprises: parsing from the at least one partial data packet and the at least one picture data packet to obtain the at least one set of metadata (Oh paragraph [0102-0103] “The reception-processing unit may deliver the acquired 360-degree video data to the decapsulation-processing unit, and may deliver the acquired 360-degree-video-related metadata to the metadata parser. The 360-degree-video-related metadata, acquired by the reception-processing unit, may have the form of a signaling table…….The decapsulation-processing unit may decapsulate the 360-degree video data, received in file form from the reception-processing unit. The decapsulation-processing unit may decapsulate the files based on ISOBMFF, etc. to acquire 360-degree video data and 360-degree-video-related metadata. “). Ouedraogo, Lai and Oh are in the same field of endeavor, namely image processing and computer graphics. Oh teaches a method of transmitting video data and metadata to improve efficiency (Oh paragraph [0028] “According to the present invention, it is possible to propose a method of efficiently increasing transmission capacity and transmitting necessary information at the time of transmitting 360-degree content.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Oh with the method of Ouedraogo and Lai to improve efficiency. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Tanabe et al. (US 20190356893 A1) teaches a method of displaying images in a file with the dynamic range of a display device.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMING WEI whose telephone number is (571)272-3831. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /XIAOMING WEI/Examiner, Art Unit 2611 /KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

May 22, 2024
Application Filed
Feb 24, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603064
CIRCUIT AND METHOD FOR VIDEO DATA CONVERSION AND DISPLAY DEVICE
2y 5m to grant · Granted Apr 14, 2026
Patent 12597246
METHOD AND APPARATUS FOR GENERATING ADVERSARIAL PATCH
2y 5m to grant · Granted Apr 07, 2026
Patent 12597175
Avatar Creation From Natural Language Description
2y 5m to grant · Granted Apr 07, 2026
Patent 12586280
TECHNIQUES FOR GENERATING DUBBED MEDIA CONTENT ITEMS
2y 5m to grant · Granted Mar 24, 2026
Patent 12586318
METHOD AND APPARATUS FOR LABELING ROAD ELEMENT, DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+26.1%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
