DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Specification
The amended (substitute) specification filed on 01/05/2026 has been entered and made of record.
Status of Claims
This Office Action is in response to the amendment filed on 01/05/2026. Claims 1-2 and 12-13 are currently amended. Claims 3-11 and 14-20 are cancelled. Claims 21-36 are new. Accordingly, claims 1-2, 12-13, and 21-36 are pending in this application.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 11/19/2024, 11/25/2024, 05/30/2025, and 08/14/2025 have been filed in accordance with the provisions of 37 CFR 1.97. Accordingly, they are being considered by the examiner.
Response to Amendments
Applicant’s Amendment filed on January 05, 2026 has been entered and made of record.
Response to Arguments
Applicant’s arguments filed on 01/05/2026 (see pages 12-14) with respect to claims 1 and 12 have been fully considered, but they are not persuasive.
Applicant’s claims are directed to determining, based on a probability estimation of a feature element, whether to skip performing entropy encoding on a first feature element.
Mokrushin discloses determining, based on a probability estimation of a syntactic element, whether to skip performing entropy encoding. Substituting the syntactic element disclosed by Mokrushin with a feature element does not alter the underlying process. The decision mechanism using probability estimation to determine whether entropy encoding should be skipped remains the same.
Accordingly, the claimed invention and Mokrushin operate on substantially the same subject matter, in substantially the same way, to achieve substantially the same result. A mere substitution of one known element for another performing the same function is considered an obvious modification. See, e.g., In re Fout, 675 F.2d 297, 301 (C.C.P.A. 1982) (holding that the substitution of one equivalent element for another does not render the claimed invention patentable); see also KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (stating that when a known technique has been used to improve one device, and a person of ordinary skill would recognize it would improve similar devices in the same way, the application of that technique is obvious).
Therefore, replacing the syntactic element disclosed by Mokrushin with a feature element would have been an obvious design choice to one of ordinary skill in the art.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all
obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention
is not identically disclosed as set forth in section 102, if the differences between the claimed
invention and the prior art are such that the claimed invention as a whole would have been obvious
before the effective filing date of the claimed invention to a person having ordinary skill in the art
to which the claimed invention pertains. Patentability shall not be negated by the manner in which
the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35
U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 12-13, 21-23, 26-32, and 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US-20200374522-A1), hereinafter “Zhou,” in view of Mokrushin et al. (US-10127913-B1), hereinafter “Mokrushin.”
Regarding Claim 1 Zhou-Mokrushin
Zhou discloses 1. A method (Zhou, Fig. 9; [0157] “an image coding method. FIG. 9 is a schematic diagram of a process of the image coding method …”) comprising:
obtaining to-be-encoded feature data comprising a plurality of feature elements of a picture feature map or an audio feature variable, wherein the plurality of feature elements comprise a first feature element; (Zhou, [0158] “901: feature extraction is performed on to-be-processed image data by using a convolutional neural network, to generate feature maps of the image data;” [0100] “…the preprocessed data include a plurality of data within a specified range centered on the current data to be coded.”)
obtaining a probability estimation result of the first feature element; (Zhou, Fig. 11; [0189] “a probability estimator 1104 configured to calculate probabilities of to-be-coded data in the discrete feature maps according to the preprocessed data;” [0198] “the probability estimator 1104 may further calculate the probabilities of the data to be coded in the discrete feature maps according to the flag data generated by the data generator…when it is indicated by the flag data that the data of a channel in the discrete feature maps are not all zero, the probabilities of the data to be coded of the channel in the discrete feature maps are calculated according to the preprocessed data,…”)
determining, based on the probability estimation result of the first feature element, whether entropy encoding needs to be performed on the first feature element; (Zhou, Fig. 11, “1105”; [0190] “an entropy coder 1105 configured to perform entropy coding on the to-be-coded data according to the probabilities of the to-be-coded data” [0196] “…when the flag data indicate that data of a channel in the discrete feature maps are not all zero, the entropy coder performs entropy coding on the to-be-coded data in the channel according to the probabilities of the to-be-coded data.”) and
Zhou does not explicitly disclose
performing the entropy encoding on the first feature element when determining that the entropy encoding needs to be performed on the first feature element; and
skipping performing the entropy encoding on the first feature element when determining that the entropy encoding does not need to be performed on the first feature element.
However, in the same field of endeavor, Mokrushin discloses the following more explicitly:
performing the entropy encoding on the first feature element when determining that the entropy encoding needs to be performed on the first feature element. (Mokrushin, Col. 17, lines 22-26 “If the probability thus obtained is in the range that is predefined with pre-established values and/or the counter of a context occurrence number has a value greater or smaller than the predefined value, then entropy encoding is not carried out,…” i.e., the context model group processor (711) determines, based on the obtained probability, whether entropy encoding should be executed. When the obtained probability satisfies the condition for encoding, entropy encoding is performed.)
skipping performing the entropy encoding on the first feature element when determining that the entropy encoding does not need to be performed on the first feature element. (Mokrushin, Col. 17, lines 22-26 “if the probability thus obtained is in the range that is predefined with pre-established values and/or the counter of a context occurrence number has a value greater or smaller than the predefined value, then entropy encoding is not carried out,…” i.e., when the obtained probability does not satisfy the condition for encoding (or falls within a predefined bypass range), entropy encoding is not carried out and the element is passed without entropy encoding.)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Zhou in view of Mokrushin, as outlined above, in order to perform the entropy encoding on a first feature element only when it is determined, based on the probability estimation, that the entropy encoding needs to be performed, and to skip the entropy encoding when it is determined that the entropy encoding does not need to be performed, as suggested by Mokrushin.
One of ordinary skill in the art would have been motivated to incorporate Mokrushin’s conditional encoding control into Zhou’s feature data processing system in order to avoid unnecessary entropy operations when probability thresholds or context occurrence values indicate that entropy coding would not provide sufficient benefit. Such a modification would predictably reduce computational complexity and improve coding efficiency while maintaining coding accuracy. (Mokrushin, Col. 2, lines 41-47)
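For illustration only (not part of the record or of either reference), the conditional decision described above can be sketched as follows; the function names, threshold values, and bypass range are hypothetical and chosen solely to show the mechanism of skipping entropy encoding based on a probability estimate:

```python
# Editor's illustrative sketch of probability-gated entropy encoding.
# The bypass range [bypass_lo, bypass_hi] stands in for Mokrushin's
# "range that is predefined with pre-established values"; actual values
# are hypothetical.

def should_entropy_encode(prob, bypass_lo=0.95, bypass_hi=1.0):
    """Return True when entropy encoding needs to be performed.

    When the estimated probability lies inside the predefined bypass
    range, encoding is skipped (cf. Mokrushin, Col. 17, lines 22-26).
    """
    return not (bypass_lo <= prob <= bypass_hi)

def encode_feature_elements(elements, estimate_prob):
    """Partition feature elements into encoded vs. bypassed groups."""
    encoded, skipped = [], []
    for e in elements:
        p = estimate_prob(e)          # probability estimation result
        if should_entropy_encode(p):
            encoded.append(e)         # would be entropy-encoded using p
        else:
            skipped.append(e)         # bypass: no entropy coding for e
    return encoded, skipped
```

Under this sketch, highly predictable elements (probability inside the bypass range) are passed without entropy coding, which is the computational saving the motivation paragraph relies upon.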
Note: The motivation that was utilized in the rejection of claim 1 applies equally as well to claims 2, 12-13, 21-23, 26-32, and 35-36.
Regarding Claim 2 Zhou-Mokrushin
Independent claim 2 recites limitations that are substantially similar to those of independent claim 1, except that claim 2 is directed to a decoder rather than an encoder. It is well established in the art that video compression systems comprise complementary components, namely an encoder (compressor) and a decoder (decompressor), which perform reciprocal operations. The encoder compresses source data to reduce the bit rate for transmission or storage, while the decoder reconstructs the data from the compressed bitstream by performing a corresponding inverse process.
Regarding Claims 3-11
3-11. (Cancelled)
Regarding Claim 12 Zhou-Mokrushin
Zhou-Mokrushin discloses 12. (Currently Amended) An apparatus, (Zhou, [0060] “FIG. 5 …image coding apparatus…”) comprising:
a memory (Zhou, Fig. 15 “1520” Memory) configured to store instructions; and
one or more processors (Zhou, Fig. 15 “1510” Processor) coupled to the memory and configured to execute the instructions to: (Zhou, [0240] “FIG. 15 is a schematic diagram of a structure of the electronic device of the embodiment of this disclosure. As shown in FIG. 15, an electronic device 1500 may include a processor (such as a central processing unit (CPU)) 1510 and a memory 1520, the memory 1520 being coupled to the central processing unit 1510. The memory 1520 may store various data, and furthermore, it may store a program for information processing, and execute the program under control of the processor 1510.”)
The remaining limitations of claim 12 are substantially similar to those in independent claim 1. Therefore, the supporting rationale of the rejection of claim 1 applies equally as well to claim 12.
Regarding Claim 13 Zhou-Mokrushin
Claim 13 recites limitations that are substantially similar to those of independent claim 12, differing only in that claim 13 is directed to a decoder rather than an encoder. Therefore, the rejection set forth for claim 12 is equally applicable to claim 13. Furthermore, with respect to the limitation of “obtaining a bitstream of to-be-decoded feature data comprising…” (Mokrushin, Col. 1, lines 57-60 “FIG. 4 shows a typical decoder of compressed data. An input bitstream (401) is transferred to an input of one or more entropy decoders (402) that transfer decoded data (403) to an input of a syntactic decoder (404).”)
Regarding Claims 14-20
14-20. (Cancelled)
Regarding Claim 21 Zhou-Mokrushin
Zhou-Mokrushin discloses 21. (New) The apparatus of claim 13,
wherein the one or more processors (Mokrushin, Col. 17, lines 59-60 “…a processor for executing commands,”) are further configured to execute the instructions to:
determine that the entropy decoding needs to be performed for the first feature element when the probability estimation result of the first feature element meets a preset condition; (Mokrushin, Col. 4, lines 10-15 “The data on the probability is extracted from the selected cell of the selected context model, which data on the probability is used for entropy decoding of a current bit in the data stream and/or for selecting a mode of direct extraction of decoded bits from the data stream;”) and
determine that the entropy encoding does not need to be performed for the first feature element when the probability estimation result of the first feature element does not meet the preset condition. (Mokrushin, Col. 4, lines 10-15 “At least one syntactic element, depending on the value of at least one context element associated therewith, and/or depending on calculated values of the probability, and/or depending on a value of the individual counter of a context occurrence number may be written into the data stream directly, bypassing the step of encoding.”)
Regarding Claim 22 Zhou-Mokrushin
Zhou-Mokrushin discloses 22. (New) The method of claim 1, wherein determining whether entropy encoding needs to be performed on the first feature element comprises:
determining that the entropy encoding needs to be performed on the first feature element when the probability estimation result of the first feature element meets a preset condition; (Mokrushin, Col. 11, lines 47-50 “The probability thus obtained is used for encoding of the current binarized bit by the entropy coder (713). The encoded data is transferred to an output data stream (714).” Further, Col. 17, lines 22-26 “if the probability thus obtained is in the range that is predefined with pre-established values…, then entropy encoding is not carried out,…”)
determining that the entropy encoding does not need to be performed on the first feature element when the probability estimation result of the first feature element does not meet the preset condition. (Mokrushin, Col. 17, lines 22-26 “if the probability thus obtained is in the range that is predefined with pre-established values and/or the counter of a context occurrence number has a value greater or smaller than the predefined value, then entropy encoding is not carried out,…”)
Regarding Claim 23 Zhou-Mokrushin
Zhou-Mokrushin discloses 23. (New) The method of claim 22,
wherein the probability estimation result of the first feature element comprises a parameter of a probability distribution of the first feature element, and wherein the preset condition is that the parameter is greater than or equal to a first threshold. (Mokrushin, Col. 23, lines 38-49. “The context model group processor (1804) … reads off the probability. If the probability thus obtained is within the range set by predefined values, and/or the context counter has a value which is greater or smaller than a predefined value, then entropy decoding is not carried out,…”)
Regarding Claim 26 Zhou-Mokrushin
Claim 26 recites limitations that are substantially similar to those of claim 21, differing only in that claim 26 is directed to a decoder rather than an encoder. Therefore, the rejection set forth for claim 21 is equally applicable to claim 26.
Regarding Claim 27 Zhou-Mokrushin
Zhou-Mokrushin discloses 27. (New) The method of claim 26,
wherein the probability estimation result of the first feature element comprises a parameter of a probability distribution of the first feature element, and wherein the preset condition is that the parameter is greater than or equal to a first threshold. (Mokrushin, Col. 23, lines 38-49. “The context model group processor (1804) … reads off the probability. If the probability thus obtained is within the range set by predefined values, and/or the context counter has a value which is greater or smaller than a predefined value, then entropy decoding is not carried out,…”)
Regarding Claim 28 Zhou-Mokrushin
Zhou-Mokrushin discloses 28. (New) The method of claim 2, further comprising setting a value of the first feature element to a result of the entropy decoding performed on the bitstream for the first feature element after performing the entropy decoding. (Zhou, [0016] “performing entropy decoding on the to-be-decoded data according to the probabilities, to obtain feature maps;” [0152] “…first, entropy decoding is performed … on the code stream b1 … to obtain discrete latent image representations ŷ (i.e. the feature maps);”)
Regarding Claim 29 Zhou-Mokrushin
Zhou-Mokrushin discloses 29. (New) The method of claim 2, further comprising setting a value of the first feature element to k when skipping performing the entropy decoding on the bitstream for the first feature element, wherein k is an integer, and wherein k is one of a plurality of candidate values of the first feature element. (Zhou, [0149] “…in performing entropy coding on the data to be decoded according to the flag data and the probabilities of the data to be decoded, for example, when the flag data indicate that data of a channel in the feature maps are all zero, all-zero feature maps are generated for the channel,…”)
Regarding Claim 30 Zhou-Mokrushin
Zhou-Mokrushin discloses 30. (New) The method of claim 29,
wherein k is 0. (Zhou, [0149] “…all-zero feature maps are generated for the channel...”)
Regarding Claim 31 Zhou-Mokrushin
Zhou-Mokrushin discloses 31. (New) The method of claim 26,
wherein the probability estimation result of the first feature element is a probability value that the value of the first feature element is k, and wherein the preset condition is that the probability value is less than or equal to a second threshold. (Mokrushin, Col. 17, lines 18-26 “The context model group processor (711) uses values of the counters of a context occurrence number in the corresponding cells of the corresponding context models for selecting the current context model…. If the probability thus obtained is in the range that is predefined with pre-established values and/or the counter of a context occurrence number has a value greater or smaller than the predefined value, then entropy encoding is not carried out,..”)
Regarding Claim 32 Zhou-Mokrushin
Zhou-Mokrushin discloses 32. (New) The method of claim 27,
wherein the probability distribution is a Gaussian distribution of the first feature element, a first parameter of the probability distribution is a mean value of the Gaussian distribution, and a second parameter of the probability distribution is a variance of the Gaussian distribution, or wherein the probability distribution is a Laplace distribution of the first feature element, a first parameter of the probability distribution is a location parameter of the Laplace distribution, and a second parameter of the probability distribution is a scale parameter of the Laplace distribution. (Zhou, [0091] “…, in 104, calculation of any type of probability model may be performed; for example, a Gaussian probability model, a Laplacian probability model, or the like, may be used. Taking the Gaussian probability model as an example, the Gaussian probability model may be obtained by estimating a mean value and variance or standard deviation of Gaussian distribution. [0124] “…taking Gaussian distribution as an example, a probability model includes a mean value μ and a standard deviation σ of Gaussian distribution, thereby obtaining the probabilities … of the data to be coded; a flag data generator generates flag data flag indicating whether the discrete latent image representations ŷ are all zero on channels, and an arithmetic coder performs entropy coding on the discrete latent image representations ŷ according to the probabilities…” [0136] “…In 602, calculation of any type of probability model may be performed; for example, a Gaussian probability model, a Laplacian probability model, or the like, may be used.”)
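For illustration only (an editor's sketch, not disclosure from Zhou), the probability-model step quoted above ([0091], [0124]) can be restated numerically: the probability assigned to a quantized (discrete) feature element is the mass a parametric distribution, e.g., a Gaussian with mean μ and standard deviation σ, or a Laplace distribution with a location and scale parameter, places on the unit interval around the integer value. All function names below are hypothetical:

```python
# Editor's illustrative sketch: probability of a quantized feature
# element k under a Gaussian or Laplace model, computed as
# P(y_hat == k) = CDF(k + 0.5) - CDF(k - 0.5).
import math

def gaussian_cdf(x, mu, sigma):
    # Standard closed-form Gaussian CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def laplace_cdf(x, loc, scale):
    # Piecewise closed-form Laplace CDF.
    z = (x - loc) / scale
    return 0.5 * math.exp(z) if x < loc else 1.0 - 0.5 * math.exp(-z)

def prob_of_quantized(k, cdf, *params):
    # Mass assigned to the unit interval centered on integer k.
    return cdf(k + 0.5, *params) - cdf(k - 0.5, *params)
```

This is the sense in which a mean/variance pair (Gaussian) or a location/scale pair (Laplace) suffices as "the probability estimation result" for an integer-valued feature element.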
Regarding Claim 35 Zhou-Mokrushin
Zhou-Mokrushin discloses 35. (New) The method of claim 2,
wherein the determining whether the entropy decoding needs to be performed for the first feature element (Mokrushin, Col. 4, lines 10-15 “The data on the probability is extracted from the selected cell of the selected context model, which data on the probability is used for entropy decoding of a current bit in the data stream…;”) comprises:
inputting the probability estimation result of the first feature element into a generative network to obtain decision information of the first feature element; (Zhou, Fig. 11 “convolutional neural network coder 1101” [0186] “a convolutional neural network coder 1101 configured to perform feature extraction on to-be-processed image data by using a convolutional neural network, so as to generate feature maps of the image data;” [0198] “…the probability estimator 1104 may further calculate the probabilities of the data to be coded in the discrete feature maps according to the flag data generated by the data generator”) and
determining, based on the decision information, whether the entropy decoding needs to be performed for the first feature element. (Zhou, Fig. 11 [0190] “…an entropy coder 1105 configured to perform entropy coding on the to-be-coded data according to the probabilities of the to-be-coded data” [0196] “…when the flag data indicate that data of a channel in the discrete feature maps are not all zero, the entropy coder performs entropy coding on the to-be-coded data in the channel according to the probabilities of the to-be-coded data.”)
Regarding Claim 36 Zhou-Mokrushin
Claim 36 recites limitations that are substantially similar to those of claim 21, except that claim 36 is directed to an apparatus rather than a method. Therefore, the rejection of claim 21 applies equally to claim 36.
Claim Rejections - 35 USC § 103
Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou-Mokrushin in view of Ikonin et al. (US-20230262243-A1), hereinafter “Ikonin.”
Regarding Claim 24 Zhou-Mokrushin-Ikonin
Zhou-Mokrushin discloses 24. (New) The method of claim 1, further comprising
Zhou-Mokrushin does not explicitly disclose
writing a result of the entropy encoding performed on the first feature element into a bitstream after performing the entropy encoding.
However, in the same field of endeavor, Ikonin discloses the following more explicitly:
writing a result of the entropy encoding performed on the first feature element into a bitstream after performing the entropy encoding. (Ikonin, [0176] “…if the channel presence flag is true, then in step 270, the channel data is written into the bitstream, which may further include encoding S280 of the channel data by an entropy encoder such as an arithmetic encoder.”)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Zhou-Mokrushin in view of Ikonin, as outlined above, in order to write “a result of the entropy encoding performed on the first feature element into a bitstream after performing the entropy encoding,” as suggested by Ikonin.
The reasoning is that the technique “may provide for an improved efficiency in terms of rate and complexity of encoding and decoding.” (Ikonin, [0204])
Note: The motivation that was utilized in the rejection of claim 24 applies equally as well to claim 25.
Regarding Claim 25 Zhou-Mokrushin-Ikonin
Zhou-Mokrushin discloses 25. (New) The method of claim 1, further comprising
Zhou-Mokrushin does not explicitly disclose
skipping writing data to the bitstream when skipping performing the entropy encoding on the first feature element.
However, in the same field of endeavor, Ikonin discloses the following more explicitly:
skipping writing data to the bitstream when skipping performing the entropy encoding on the first feature element. (Ikonin, [0176] “…If, on the other hand, the channel presence flag is false, then in step 270, the channel data is not written into the bitstream, i.e. the writing is bypassed (or skipped).”)
Claim Rejections - 35 USC § 103
Claim 33 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou-Mokrushin in view of Przyborowski et al. (US-20250181943-A1), hereinafter “Przyborowski.”
Regarding Claim 33 Zhou-Mokrushin-Przyborowski
Zhou-Mokrushin discloses 33. (New) The method of claim 26,
Zhou-Mokrushin does not explicitly disclose
wherein the probability estimation result of the first feature element is from a Gaussian mixture distribution of the first feature element, and wherein the preset condition is:
a first sum of any variance of the Gaussian mixture distribution and a second sum of absolute values of differences between all mean values of the Gaussian mixture distribution and k is greater than or equal to a first threshold;
a difference between any mean value of the Gaussian mixture distribution and k is greater than or equal to a second threshold; or any variance of the Gaussian mixture distribution is greater than or equal to a third threshold.
However, in the same field of endeavor, Przyborowski discloses the following more explicitly:
wherein the probability estimation result of the first feature element is from a Gaussian mixture distribution of the first feature element, (Przyborowski, [0016] “…a method and/or devices of an efficient Gaussian Mixture Model (GMM) distribution based approximation of a data set in a computing environment..”) and wherein the preset condition is:
a first sum of any variance of the Gaussian mixture distribution and a second sum of absolute values of differences between all mean values of the Gaussian mixture distribution and k is greater than or equal to a first threshold; (Przyborowski [0027] …, in one or more embodiments, the EM algorithm may include two operations, viz. expectation and maximization. Assuming that the GMM/GMM distribution 180 includes q components or q Gaussian distributions therein, in one or more embodiments, as part of the expectation operation, for each observation x.sub.j∈{x.sub.1, x.sub.2 . . . x.sub.l}, the probability that x.sub.j originates from the i.sup.th (i={1, 2 . . . q}) Gaussian distribution may be computed. In one or more embodiments, as part of the maximization operation, for each i={1, 2 . . . q}, parameters μ.sub.i, Σ.sub.i and w.sub.i (μ.sub.i may refer to a mean of the i.sup.th Gaussian distribution, Σ.sub.i may refer to a variance of the i.sup.th Gaussian distribution for a univariate form thereof or a covariance (e.g., in matrix form) of the i.sup.th Gaussian distribution for a multivariate form thereof, and w.sub.i may refer to a weight of the i.sup.th Gaussian distribution indicative of a probability that input data 170 x.sub.j belongs to the i.sup.th Gaussian distribution) that maximize an evidence lower bound for all x.sub.j given the aforementioned derived probabilities may be found. …, initial values for the mean of the i.sup.th Gaussian distribution may be obtained through a random guess and/or a heuristic approach such as one involving a k-means (or, q-means) clustering algorithm.”)
a difference between any mean value of the Gaussian mixture distribution and k is greater than or equal to a second threshold; or any variance of the Gaussian mixture distribution is greater than or equal to a third threshold. (Przyborowski, [0031] “…operation 204 may involve modifying input data 170 (e.g., the modified input data 170 is shown as modified data 196 in FIG. 1) by replacing, for each constituent Gaussian distribution of GMM distribution 180, numeric value(s) and/or vector(s) of the numeric values of input data 170 that differ in magnitude from a center of the each constituent Gaussian distribution by less than a threshold value (e.g., threshold 194 shown stored in memory 114.sub.1) with a mean value of input data 170, with a weight of the mean value being indicative of a cardinality (or frequency)… [0034] “criteria 302 may involve the numeric value(s) and/or the vector(s) of the numeric values of input data 170 being different in magnitude from the center of the each constituent Gaussian distribution by less than a numeric distance (e.g., threshold 194) and from the centers of other constituent Gaussian distributions (or, at least one other constituent Gaussian distribution) of GMM distribution 180 by more than another numeric distance (e.g., threshold 310)”)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Zhou-Mokrushin with Przyborowski, as outlined above, in order to incorporate a first sum involving a variance of the Gaussian mixture distribution and a second sum of absolute differences between the mean values of the Gaussian mixture distribution and k, each compared against a respective threshold, as suggested by Przyborowski.
The motivation is “for improved efficiency and/or accuracy in the generation of GMM distribution from input data.” (Przyborowski, [0023])
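For illustration only (an editor's restatement, not disclosure from Przyborowski), the three alternative preset conditions recited in claim 33 for a Gaussian mixture can be expressed as simple threshold tests. The claim wording admits more than one reading; the sketch below assumes absolute differences and per-component variances, and all names and thresholds are hypothetical:

```python
# Editor's illustrative sketch of claim 33's alternative preset
# conditions for a Gaussian mixture with mean values `mus`, variances
# `variances`, candidate value k, and thresholds t1, t2, t3.
def gmm_preset_condition_met(mus, variances, k, t1, t2, t3):
    # (a) any variance plus the sum of |mean - k| over all means >= t1
    cond_a = any(v + sum(abs(m - k) for m in mus) >= t1 for v in variances)
    # (b) the difference between any mean value and k >= t2
    #     (absolute difference assumed here)
    cond_b = any(abs(m - k) >= t2 for m in mus)
    # (c) any variance >= t3
    cond_c = any(v >= t3 for v in variances)
    # The claim recites the three conditions in the alternative.
    return cond_a or cond_b or cond_c
```

The disjunction mirrors the claim's "; … ; or …" structure: meeting any one alternative satisfies the preset condition.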
Claim Rejections - 35 USC § 103
Claim 34 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou-Mokrushin in view of Smolak et al. (US-9903803-B2), hereinafter “Smolak.”
Regarding Claim 34 Zhou-Mokrushin-Smolak
Zhou-Mokrushin discloses 34. (New) The method of claim 26,
Zhou-Mokrushin does not explicitly disclose
wherein the probability estimation result of the first feature element is from an asymmetric Gaussian distribution of the first feature element, and wherein the preset condition is:
an absolute value of a difference between a mean value of the asymmetric Gaussian distribution and k is greater than or equal to a first threshold; a first variance of the asymmetric Gaussian distribution is greater than or equal to a second threshold; or a second variance of the asymmetric Gaussian distribution is greater than or equal to a third threshold.
However, in the same field of endeavor, Smolak discloses the following more explicitly:
wherein the probability estimation result of the first feature element is from an asymmetric Gaussian distribution of the first feature element, and wherein the preset condition is:
an absolute value of a difference between a mean value of the asymmetric Gaussian distribution and k is greater than or equal to a first threshold; (Smolak, Col. 9, lines 45-49 “Step 354 in which a batch-specific signal peak threshold is determined as a function of the batch-specific noise characteristic may involve setting the signal peak threshold based on the mean μ of the asymmetric Gaussian distribution fit plus an increment.” i.e., the threshold is derived based on the mean value)
a first variance of the asymmetric Gaussian distribution is greater than or equal to a second threshold; or a second variance of the asymmetric Gaussian distribution is greater than or equal to a third threshold. (Smolak, Col. 9, lines 50-55 “…peak threshold may be set to be the greater of the mean μ of the asymmetric Gaussian distribution fit plus three times the standard deviation σ of the asymmetric Gaussian distribution fit or the mean μ of the asymmetric Gaussian distribution…, whichever is greater (e.g., signal peak threshold is no smaller…)”; Smolak, Col. 10, lines 31-34 “The asymmetric Gaussian distribution fit is also shown referenced by numeral 642, and integer multiples of the standard deviation σ away from the mean μ are indicated by the vertical dashed lines…” i.e., the thresholds are derived based on the mean and variance/standard deviation)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to modify the teachings of Zhou-Mokrushin in view of Smolak, as outlined above, in order to incorporate the use of an asymmetric Gaussian distribution for the feature element, wherein the thresholds are determined based on the mean and variance of the asymmetric Gaussian distribution, as taught by Smolak.
One of ordinary skill in the art would have been motivated to incorporate the asymmetric Gaussian distribution into the system of Zhou-Mokrushin such that the “efficiency of the probability estimation may be improved.” (Zhou, [0090])
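For illustration only (an editor's restatement, not disclosure from Smolak), the alternative preset conditions recited in claim 34 for an asymmetric Gaussian, parameterized here by a mean and two one-sided variances, reduce to three threshold tests taken in the alternative; all parameter names and values are hypothetical:

```python
# Editor's illustrative sketch of claim 34's alternative preset
# conditions for an asymmetric Gaussian with mean `mu` and one-sided
# variances `var1` and `var2`, candidate value k, thresholds t1-t3.
def asym_gaussian_condition_met(mu, var1, var2, k, t1, t2, t3):
    return (
        abs(mu - k) >= t1   # |mean - k| >= first threshold
        or var1 >= t2       # first variance >= second threshold
        or var2 >= t3       # second variance >= third threshold
    )
```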
Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's
disclosure.
Chelian et al. US-10133983-B1
McNair et al. US-10446273-B1
Wang et al. US-9972314-B2
Oboukhov et al. US-11811425-B2
PIAO et al. US-20240283935-A1
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASTEWAYE GETTU ZEWEDE whose telephone number is (703)756-1441. The examiner can normally be reached Mo-Fr 8:30 am to 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASTEWAYE GETTU ZEWEDE/Examiner, Art Unit 2481 /WILLIAM C VAUGHN JR/Supervisory Patent Examiner, Art Unit 2481