DETAILED ACTION
This office action is in response to Applicant’s Amendment/Request for Reconsideration, received on 11/28/2025. Claims 1, 4, 7-9, and 13 have been amended. Claims 3 and 16 have been cancelled. Claims 1-2 and 4-15 are pending and have been considered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see pg. 5, filed 11/28/2025, with respect to the objections to claims 8-9 and 13 have been fully considered and are persuasive. The objections to claims 8-9 and 13 have been withdrawn.
Applicant’s arguments, see pg. 5, filed 11/28/2025, with respect to “rejections under 35 U.S.C. 112(b) concerning antecedent basis” have been fully considered and are persuasive. The rejections of claims 1-2 and 4-15 have been withdrawn.
Applicant's arguments filed 11/28/2025, see pgs. 5-6, with respect to claim interpretation under 35 U.S.C. 112(f) have been fully considered but they are not persuasive.
Applicant’s representative asserts, “Claims 1-16 stand rejected under 35 U.S.C. §112(a) and §112(b) based on the Examiner's interpretation of 'redaction system' as a means-plus-function limitation under 35 U.S.C. 112(f). Applicant respectfully traverses this interpretation and the corresponding rejections. The presumption that § 112(f) does not apply to a claim term that does not use the word 'means' is not overcome here. The term 'redaction system' is not a generic placeholder or nonce term but is understood by a Person of Ordinary Skill in the Art (POSITA) to connote a class of structures-specifically, a combination of hardware and software components configured for redacting information. The specification provides ample structural context for this term, preventing the application of §112(f). Figure 1 and its corresponding description at paragraphs [0011]-[0020] disclose a concrete structural arrangement for the 'redaction system 100.' This includes specific components such as a 'transcription engine 106,' a 'redaction engine 110,' a 'voice communication database 105,' and a 'transcription database 108.' These are not mere functional blocks but are recognized structural components in the art of automated voice processing. The specification further describes the interaction and operation of these structural components to perform the claimed method steps. For example, paragraph [0012] describes the 'transcription engine 106' generating a transcript, and paragraph [0014] describes the 'redaction engine 110' using triggers to detect sensitive data. This disclosure provides sufficient structure to inform the meaning of 'redaction system' and rebuts the presumption that it is a generic placeholder. Because 'redaction system' is not a § 112(f) limitation, the heightened analysis for indefiniteness under §112(b) and for written description/enablement under §112(a) is inapplicable. 
The specification's description of the functions performed by the various engines and their interaction with the databases, as detailed throughout paragraphs [0012]- [0023], is sufficient to enable a POSITA to make and use the invention without undue experimentation. The level of detail is commensurate with the predictable nature of the software arts. The claims are definite, and the specification provides adequate written description and enablement.
Therefore, Applicant respectfully submits that the term 'redaction system' is not subject to interpretation under 35 U.S.C. § 112(f). Accordingly, the rejections of claims 1-16 under §112(a) and §112(b) based on this interpretation are respectfully traversed, and their withdrawal is requested.”
Examiner respectfully disagrees. To determine whether a word, term, or phrase coupled with a function denotes structure, examiners may check whether:
(1) the specification provides a description sufficient to inform one of ordinary skill in the art that the term denotes structure;
(2) general and subject matter specific dictionaries provide evidence that the term has achieved recognition as a noun denoting structure; and/or
(3) the prior art provides evidence that the term is an art-recognized structure to perform the claimed function.
(see MPEP § 2181)
Applicant has focused on (1), asserting that "The term 'redaction system’ is not a generic placeholder or nonce term but is understood by a Person of Ordinary Skill in the Art (POSITA) to connote a class of structures—specifically, a combination of hardware and software components configured for redacting information. The specification provides ample structural context for this term, preventing the application of § 112(f).”
Applicant’s assertions and citations to the specification are not directed to any explicit definition of the term “redaction system.” Rather, the cited portions focus on functionality that can or may be performed, in an open-ended or exemplary manner. The examiner submits that these elements lack definite structure, as the specification treats these components purely as a black box.
This is further evidenced by Applicant’s own assertion that the term is directed to a class of structures, “...specifically, a combination of hardware and software components configured for redacting information,” without providing the requisite software, flow charts, algorithms, or the like.
Applicant has not asserted or provided evidence directed to either (2) or (3), and the examiner has not identified the requisite evidence under either prong; thus, these factors are moot at the present time.
Applicant's arguments filed 11/28/2025, see pg. 7, with respect to the rejection of claims 7 and 9 under 35 U.S.C. 112(a), have been fully considered but they are not persuasive.
Applicant’s representative asserts, “Regarding claim 7, the limitation 'primary trigger... checks for sensitive content by identifying a first detection event' is supported by paragraph [0014], which states, 'Redaction engine uses a series of primary triggers to detect the start of the personally identifiable or sensitive voice data.' This 'detection of the start' is the 'first detection event.' The limitation 'analyzing audio data... for a first predetermined time period after the first detection event' is supported by the disclosure that a 'primary trigger moves forward in time from the point of activation' (specification, par. [0014]) and that the time periods for the duration of each trigger are configurable (specification, par. [0017]).”
In response, with regard to claim 7, the examiner respectfully disagrees. [0014] of the instant application discloses primary triggers for detecting the start of personally identifiable or sensitive voice data. This does not disclose that the trigger itself performs the claimed operation. Instead, the examiner believes the redaction engine performs the checking, based upon whether or not primary triggers have been placed. Should the claim be amended to shift the focus of the checking/analysis to operations performed by the redaction engine/system, there would appear to be proper written description. As this is not currently the case, the claim remains rejected. The examiner notes that the secondary basis for rejection with regard to “analyzing audio data…for a first predetermined time…” has been withdrawn, as Applicant has cited significant written description.
With regard to claim 9, the examiner respectfully disagrees. [0015] of the instant application discloses secondary triggers for detecting the end of personally identifiable or sensitive voice data. This does not disclose that the trigger itself performs the claimed operation. Instead, the examiner believes the redaction engine performs the checking, based upon whether or not secondary triggers have been placed. Should the claim be amended to shift the focus of the checking/analysis to operations performed by the redaction engine/system, there would appear to be proper written description. As this is not currently the case, the claim remains rejected. The examiner notes that the secondary basis for rejection with regard to “analyzing audio data…for a first predetermined time…” has been withdrawn, as Applicant has cited significant written description.
Applicant’s arguments, see pg. 7, filed 11/28/2025, with respect to claim rejections of claims 12-13 under 35 U.S.C. 112(a) have been fully considered and are persuasive. The rejections of claims 12 and 13 have been withdrawn.
Applicant's arguments filed 11/28/2025, see pgs. 7-8, with respect to “Claim Rejections – 35 USC Section 101” have been fully considered but they are not persuasive.
Applicant’s representative asserts, “Under the framework of Alice Corp. v. CLS Bank Int'l, 573 U.S. 208 (2014), the claims are not directed to a judicial exception. The Examiner alleges that the claimed method can be performed in the mind or with pen and paper. This characterization is a significant oversimplification that disregards the specific technical features of the claims and misapplies the mental steps doctrine. The claims are directed to a specific improvement in computer- implemented redaction technology, not a fundamental concept or mental process. Under Alice Step 2A, the claims are directed to a specific technical improvement in the functioning of a computer, not an abstract idea. See Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016). The claimed invention solves a technical problem inherent in prior art automated redaction systems: the high rate of false positives and incomplete redactions, which result in either removing valuable non-sensitive data or failing to remove all sensitive data (specification, par. [0004]-[0005]). The claimed solution is a specific, two-tiered trigger architecture comprising distinct primary and secondary triggers. This architecture is a computer-centric solution that improves the accuracy and reliability of the redaction process itself. The Examiner's analogy of a person listening to audio and marking a transcript fails to account for the claimed system's specific logic, such as the distinct operational modes of the primary and secondary triggers (specification, par. [0014], [0015]). This is not a generic application of an abstract idea but a specific implementation of a technical solution to a technical problem.”
In response, the examiner refers to the specific claim language of independent claim 1 as currently amended. The examiner acknowledges Applicant’s assertion of an improvement in computer-implemented redaction technology; however, it is unclear to the examiner how an improvement to computer-implemented technology can be introduced/implemented without a claimed computer performing said improvement. Neither the claims nor the specification establishes any connection between a computer and the “redaction system,” the only component claimed. Further, with regard to the claimed improvement taking the form of the “distinct primary and secondary triggers,” the examiner notes that the primary and secondary triggers are not currently defined in the claims as anything more than “triggers.” The distinct operational modes cited by Applicant are not implemented in the claims, and recitations appearing only in the specification do not contribute toward eligibility. As these distinct operational modes are not incorporated into the claims, no improvement can be provided by these modes; therefore, the claims remain ineligible subject matter under 35 U.S.C. 101.
Applicant's arguments filed 11/28/2025, see pgs. 8-9, with respect to “Claim Rejection – 35 USC Section 102” have been fully considered but they are not persuasive.
Applicant’s representative asserts, “Claims 1-6 stand rejected under 35 U.S.C. § 102(a)(1) as anticipated by Channakshava et al. (US-10,728,384-B1) ('Channakshava'). Applicant respectfully traverses this rejection as Channakshava fails to teach each and every limitation of the claims. Specifically, Channakshava fails to disclose, either expressly or inherently, 'analyzing the recorded voice communication using a plurality of secondary triggers to identify a second data set of sensitive information and a corresponding second set of time slices,' as recited in claim 1. The Examiner asserts that Channakshava's disclosure of a 'second intent' for an 'expiration date' (Channakshava, col. 7, lines 1-6) after a 'first intent' for a 'credit card number' teaches the claimed 'plurality of secondary triggers.' This represents a fundamental misinterpretation of the claimed invention's architecture. The specification provides a clear functional distinction between primary and secondary triggers that is absent from Channakshava. As detailed in paragraph [0015], 'In contrast to the primary triggers, the secondary trigger moves both backwards and forward from the point of activation. Thus, if a primary trigger fails to remove all necessary data, the secondary trigger walks back to ensure any missed sensitive data is still removed.' Further, paragraph [0017] explains, 'The secondary triggers primarily serve as a failsafe in case the primary trigger fails to remove all required data.'
Channakshava merely teaches a sequence of functionally identical triggers, which it terms 'intents.' The 'second intent' for an expiration date operates in the same forward-looking manner as the 'first intent' for the credit card number. There is no disclosure in Channakshava of a distinct set of triggers that operate bidirectionally or that are activated conditionally as a failsafe mechanism. The triggers in Channakshava are all of the same primary type, merely targeting different keywords in a predictable sequence. Channakshava is missing the core architectural concept of a two-tiered trigger system where the secondary triggers have a different, more robust operational capability (bidirectional analysis) and serve a distinct purpose (failsafe) compared to the primary triggers. Because Channakshava fails to teach the claimed 'plurality of secondary triggers,' it necessarily also fails to teach the subsequent limitation of 'combining the first set of time slices and the second set of time slices.' The Examiner's reliance on a 'consolidated intent' (Channakshava, col. 12, lines 30-40) is misplaced, as this describes generating a single, broader time slice from a single analysis, not combining the results of two distinct types of analyses (primary and secondary). As Channakshava is missing at least these material limitations, it cannot anticipate claim 1.
Dependent claims 2 and 4-6 are likewise patentable over Channakshava. These claims depend from allowable claim 1 and further include limitations not taught by Channakshava. For instance, claims 5 and 6 further define the distinct nature of the secondary triggers, reinforcing the patentable distinction of the two-tiered architecture over Channakshava's simple sequential system. Accordingly, withdrawal of the § 102 rejection is respectfully requested.
In response, the examiner refers to the claim language as currently constructed. Specifically, with regard to the “plurality of secondary triggers,” the examiner respectfully asserts that the specificity with which Applicant describes the secondary triggers in the arguments is not equivalent to what is currently claimed. Independent claim 1 currently recites “using a plurality of secondary triggers to identify a second data set of sensitive information and a corresponding second set of time slices”. This definition is equivalent to that of the primary triggers. The claims do not further define the secondary triggers as moving both forward and backward in time from a point of activation; therefore, Applicant’s cited specification paragraphs [0015] and [0017] do not incorporate the cited concepts into the claims. There is no claim language requiring the secondary triggers to be “a distinct set of triggers that operate bidirectionally or that are activated conditionally as a failsafe mechanism” (see pg. 9 of remarks). In view of this, the examiner respectfully asserts that Channakshava necessarily also teaches the limitation of “combining the first set of time slices and the second set of time slices,” as the “two distinct types of analysis (primary and secondary)” are currently claimed to be equivalent; accordingly, if Channakshava discloses the primary triggers (not argued against by Applicant), it also discloses the secondary triggers, as these types of triggers are currently defined to be equivalent.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., defining the secondary triggers to be operating under a different mechanism than the primary triggers, defining the secondary triggers to look backwards and forwards in time, failsafe mechanism, etc.) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
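For illustration only, the claim-scope point above can be sketched in pseudocode-style Python. All names, figures, and window lengths below are hypothetical and are not drawn from the claims, the specification, or Channakshava; the sketch merely shows that, as currently claimed, a "trigger" of either type reduces to a mapping from detection points to time slices, and that "combining" two sets of slices is a simple interval union:

```python
# Hypothetical sketch: a time slice is a (start, end) pair in seconds.
# As currently claimed, "primary" and "secondary" triggers are defined
# identically: each identifies sensitive data and a set of time slices.

def slices_from_triggers(detection_times, window=10.0):
    """Forward-looking windows: each trigger marks [t, t + window]."""
    return [(t, t + window) for t in detection_times]

def combine_slices(first, second):
    """Merge two sets of time slices into non-overlapping intervals."""
    merged = []
    for start, end in sorted(first + second):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

primary = slices_from_triggers([5.0, 40.0])    # hypothetical detections
secondary = slices_from_triggers([12.0])       # identically defined
print(combine_slices(primary, secondary))      # [(5.0, 22.0), (40.0, 50.0)]
```

Because nothing in the claim distinguishes how the two trigger types generate their slices, the same function serves for both in this sketch, mirroring the equivalence noted above.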
Applicant's arguments filed 11/28/2025, see pgs. 9-11, with respect to “Claim Rejections – 35 USC Section 103” have been fully considered but they are not persuasive.
Applicant’s representative asserts, “Claims 7-15 stand rejected under 35 U.S.C. § 103 over Channakshava in view of Mandic et al. (US 2018/0054519 A1) ('Mandic'). Applicant respectfully traverses these rejections. Regarding claim 7, the Examiner alleges it would have been obvious to modify Channakshava's system to analyze for a 'first predetermined time period' as taught by Mandic. This conclusion is based on impermissible hindsight. Channakshava teaches a precise system that determines the analysis window by identifying the actual start and end of the sensitive data utterance (Channakshava, e.g., [0078]). Mandic, in contrast, teaches a less precise method of using a fixed, predetermined time period (Mandic, [0097]). A person of ordinary skill in the art would not have been motivated to combine Mandic's less precise method with Channakshava's system, as it would represent a degradation of Channakshava's precision. The Examiner's rationale that this would 'simplify' the system uses Applicant's disclosure as a blueprint for the combination.
Regarding claim 8, this claim depends from patentable claim 7 and is therefore also patentable. Furthermore, the Examiner's rationale for this rejection is flawed. The Examiner argues from silence that Channakshava does not disclose looking backward, but an argument from silence is insufficient to show a negative limitation is taught or suggested. More importantly, the Examiner acknowledges that Channakshava at [0067] teaches setting a start timestamp 'ten seconds prior to that time,' which is a form of backward-looking analysis. This teaching directly contradicts the claimed negative limitation, and thus the combination fails to teach the invention of claim 8.
Regarding claim 9, the Examiner alleges it would have been obvious to apply Mandic's bidirectional analysis to Channakshava's 'second intent.' This rejection relies on impermissible hindsight. The claimed invention lies in the specific two-tiered architecture where primary triggers are forward-looking and secondary triggers are bidirectional. There is no teaching or suggestion in the combined references that would motivate a skilled artisan to selectively apply Mandic's bidirectional analysis only to Channakshava's 'second intent' while leaving the 'first intent' as forward-only, thereby arriving at the claimed architecture. Such a selective combination is only obvious in view of Applicant's own disclosure. Claims 10-13 depend from patentable claims and are therefore also patentable. Each adds further specific limitations that are not taught or suggested by the prior art combination. Regarding claim 14, the Examiner improperly combines Channakshava with Mandic to find the claimed 'encrypting' step. The Examiner's reasoning conflates redaction with encryption. Mandic's discussion of a 'reduced need to encrypt' (Mandic, [0059]) does not teach that encryption is actually performed on the redacted communication. The Examiner's assertion that the output of Mandic's filter is 'encrypted' because sensitive information is removed is a misinterpretation of the term 'encryption,' which refers to a cryptographic process, not merely obscuring data. The references do not teach or suggest encrypting the final redacted voice communication.
Regarding claim 15, the proposed combination teaches away from the claimed invention.
Claim 15 recites storing an encrypted version of the original recorded voice communication. The stated purpose of Mandic's system is to *delete* the original audio file to reduce liability and data storage burdens (Mandic, [0029]). A person of ordinary skill in the art would not be motivated to combine the references in a manner that is directly contrary to the explicit teachings and purpose of Mandic. For these reasons, the rejections of claims 7-15 under 35 U.S.C. § 103 are respectfully traversed, and their withdrawal is requested.”
In response, the examiner refers to the specific disclosures of Channakshava in view of Mandic. Specifically, with regard to claim 7, the examiner respectfully asserts that the precision levels of Channakshava and Mandic are equivalent. Channakshava discloses creating a start timestamp that correlates to ten seconds prior to a stop timestamp [Col. 13, Lines 30-50]. Mandic discloses moving an end-of-recording timestamp to a point in time that is a predetermined time period after the start time [0097]. If the predetermined time period of Mandic is ten seconds, the operations of Channakshava and Mandic are equivalent. Both references create a time slice corresponding to sensitive data to be redacted. It is unclear to the examiner how the precision levels of the two references differ when the operations for determining the to-be-redacted portions of data are equivalent and are performed with regard to the same timestamps, albeit with reversed directionality with respect to the timestamp.
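The equivalence asserted above is simple interval arithmetic, illustrated in the following sketch. The function names and the ten-second value are illustrative assumptions, not citations to either reference:

```python
# Channakshava-style: given a stop timestamp, set the start ten seconds earlier.
def slice_from_stop(stop_ts, lookback=10.0):
    return (stop_ts - lookback, stop_ts)

# Mandic-style: given a start timestamp, set the end a predetermined period later.
def slice_from_start(start_ts, period=10.0):
    return (start_ts, start_ts + period)

# With a ten-second period, the two constructions yield the same interval
# whenever they anchor on the same pair of timestamps; only the direction
# of the computation differs.
assert slice_from_stop(30.0) == slice_from_start(20.0)  # both (20.0, 30.0)
```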
With regard to claim 8, the examiner respectfully asserts that Applicant is misinterpreting the cited mapping. Specifically, Applicant’s cited section of Channakshava discloses looking back from an end timestamp to determine a starting timestamp, i.e., setting the primary trigger. There is no additional looking-back operation past the primary trigger. As Applicant has amended the claim to recite “audio data occurring before the first detection event” corresponding to the first trigger, this indicates a looking-back operation with regard to the start timestamp. Nowhere does Channakshava disclose looking backwards with regard to the beginning of a “redaction” interval. It is unclear to the examiner how [0067] contradicts the assertion that Channakshava does not look backward from a first detection event, when the cited section of Channakshava looks back with regard to what would be a second detection event based on what Applicant cited.
With regard to claim 9, as previously discussed, the motivation to combine the time periods for analyzing data of Channakshava and Mandic exists because both references consider predetermined periods of time based upon a start/stop timestamp. In view of this motivation, the examiner respectfully asserts that the combination as applied to claim 9 is not based upon impermissible hindsight reasoning.
With regard to claims 10-13, in view of the examiner’s assertion that independent claim 1 is not currently allowable (see Response to Arguments, 35 U.S.C. 102, above), these claims remain rejected as being dependent upon rejected base claims.
With regard to claim 14, the examiner respectfully asserts that a disclosure of a “reduced need to encrypt” necessarily indicates that encryption is part of the functionality of Mandic. Further, consider [0082], which also discloses “[The redaction process] reduces data storage charges and overhead and encryption processing time”. Reducing required encryption time necessarily requires that encryption be performed in order to provide this improvement.
With regard to claim 15, the examiner refers to the different types of data stored in integrated database 76 of Fig. 11 of Mandic. Specifically, [0116] discloses “the call center database 24 is combined with the redaction database 60 forming a large, integrated database 76”, wherein [0063] discloses “the entire audio record of the comm session is stored by CC processor 18 both in temporary storage 22 and in CC database 24”. This clearly indicates storing a version of the recorded voice communication in association with the redacted voice communication, as the databases are integrated together.
In response to all of applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "redaction system" in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. [0011] of the instant application discloses “various functions of the redaction system 100 may be executed on a server, group of servers, or in a distributed cloud computing environment.” Fig. 1 of the instant application is comprised of engines and databases, wherein the terms “engine” and “database” are not defined anywhere.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-2, 4-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1-2, 4-15 are also rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the enablement requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
Claim 1 recites a “redaction system” interpreted under 112(f) as noted above. The element claimed is detailed in Fig. 1 with a reference numeral of 100.
As to the written description requirement, MPEP 2161.01 I. details determining whether there is adequate written description for a computer-implemented functional claim limitation, noting:
Similarly, original claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP §§ 2163.02 and 2181, subsection IV.
Further as to the written description requirement, MPEP 2163.03 V details written description circumstances arising from original claims not sufficiently described, noting:
An original claim may lack written description support when (1) the claim defines the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved or (2) a broad genus claim is presented but the disclosure only describes a narrow species with no evidence that the genus is contemplated. See Ariad Pharms., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1349-50 (Fed. Cir. 2010) (en banc). The written description requirement is not necessarily met when the claim language appears in ipsis verbis in the specification. "Even if a claim is supported by the specification, the language of the specification, to the extent possible, must describe the claimed invention so that one skilled in the art can recognize what is claimed. The appearance of mere indistinct words in a specification or a claim, even an original claim, does not necessarily satisfy that requirement." Enzo Biochem, Inc. v. Gen-Probe, Inc., 323 F.3d 956, 968, 63 USPQ2d 1609, 1616 (Fed. Cir. 2002).
Further as to the written description requirement, MPEP 2163.03 VI details written description circumstances arising from indefiniteness of a means plus function limitation, noting:
A claim limitation expressed in means- (or step-) plus-function language "shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof." 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. If the specification fails to disclose sufficient corresponding structure, materials, or acts that perform the entire claimed function, then the claim limitation is indefinite because the applicant has in effect failed to particularly point out and distinctly claim the invention as required by 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. In re Donaldson Co., 16 F.3d 1189, 1195, 29 USPQ2d 1845, 1850 (Fed. Cir. 1994) (en banc). Such a limitation also lacks an adequate written description as required by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because an indefinite, unbounded functional limitation would cover all ways of performing a function and indicate that the inventor has not provided sufficient disclosure to show possession of the invention. See also MPEP § 2181.
As to the enablement requirement, MPEP 2163.02 details examples of enablement issues in computer programming cases, in particular in section II regarding block elements within a computer, noting:
While no specific universally applicable rule exists for recognizing an insufficiently disclosed application involving computer programs, an examining guideline to generally follow is to challenge the sufficiency of disclosures that fail to include the programmed steps, algorithms, or procedures that the computer performs necessary to produce the claimed function. These can be described in any way that would be understood by one of ordinary skill in the art, such as with a reasonably detailed flowchart which delineates the sequence of operations the program must perform. In programming applications where the software disclosure only includes a flowchart, as the complexity of functions and the generality of the individual components of the flowchart increase, the basis for challenging the sufficiency of such a flowchart becomes more reasonable because the likelihood of more than routine experimentation being required to generate a working program from such a flowchart also increases.
The claimed redaction system of claim 1 is illustrated by element 100 in Fig. 1, with [0011] noting that the redaction system may be a generalized redaction system programmed to remove personally identifiable or sensitive information, and [0015] noting that the redaction system is used to analyze voice communications. The specification does not provide details as to how the voice communications are analyzed to perform the voice redaction techniques recited in the claims. The specification discloses the software, algorithm, and/or flow chart of these operations of the redaction system only at a high level, and does not provide the level of detail (e.g., code or algorithm) necessary to indicate to one of ordinary skill in the art:
That the inventor(s) at the time the application was filed, had possession of the claimed invention, or
How to make or use the invention without undue experimentation
All claims dependent upon rejected base claims are also rejected for failing to meet the written description and enablement requirements.
In sum, claims 1-2, 4-15 fail to meet the enablement requirement of 35 U.S.C. 112(a). The lack of disclosure of the code/algorithm to implement the redaction system (as described above) in a manner understandable to a person of ordinary skill in the art results in claimed subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
Further, claims 1-2, 4-15 fail to meet the written description requirement of 35 U.S.C. 112(a). The lack of disclosure of the code/algorithms to implement the claimed redaction system (as detailed above) in a manner understandable to a person of ordinary skill in the art results in a failure to reasonably convey that the inventor(s) at the time the application was filed, had possession of the claimed invention.
Additionally, claims 7-13 fail to meet the written description requirement of 35 U.S.C. 112(a) for the following reasons:
The method disclosed in claim 7, specifically the step of “wherein each primary trigger of the plurality of primary triggers checks for sensitive content by identifying a first detection event,” is not disclosed in the specification. [0006] of the instant application discloses initiation of primary triggers based on detection of sensitive data. This does not indicate that the triggers check for sensitive content; instead, it appears to the examiner that this represents identification of triggers based on the textual content being analyzed by an engine, i.e. transcription engine 106, making it unclear how the trigger itself is able to check for sensitive content.
The method disclosed in claim 9, specifically “wherein each secondary trigger of the plurality of second triggers checks for sensitive content by identifying a second detection event,” is not disclosed in the specification. [0006] of the instant application discloses initiation of primary triggers based on detection of sensitive data. There is no disclosure as to how the triggers are used to check for sensitive content, how secondary triggers can be used for second detection events, or what a second detection event is. This makes it unclear to the examiner how the operations of a secondary trigger are performed.
Dependent claims 8, 10-13 are also rejected under 35 U.S.C. 112(a) as being dependent upon rejected base claims.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The claim limitation “redaction system” in claim 1 invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Claim 1 recites a “redaction system” interpreted under 112(f) as noted above. The element claimed is detailed in Fig. 1 with a reference numeral of 100. This system (100) is not disclosed as being directed to any particular structure. The only indication of some type of structure is provided in [0011], noting that the redaction system may be a generalized redaction system programmed to remove personally identifiable or sensitive information, and [0015], noting that the redaction system is used to analyze voice communications. Still, this leaves no description as to how the system is constructed to perform these actions. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
In view of the above independent claim rejection, all claims dependent upon a rejected base claim are also rejected under the same grounds. Therefore, claims 2, 4-15 are also rejected under 112(b) as being indefinite due to the indefinite structure of the redaction system.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-5 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Independent claim(s) 1 recite:
receiving a recorded voice communication at a redaction system;
analyzing the recorded voice communication using a plurality of primary triggers to identify a first data set of sensitive information and a corresponding first set of time slices;
analyzing the recorded voice communication using a plurality of secondary triggers to identify a second data set of sensitive information and a corresponding second set of time slices;
combining the first set of time slices and the second set of time slices to determine a combined set of time slices;
redacting any audio data from the recorded voice communication occurring within the combined set of time slices to generate a redacted voice communication; and,
storing the redacted voice communication in a voice communication database of the redaction system.
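Purely as an illustration of the kind of processing the claimed steps describe (this sketch is not drawn from the application's disclosure, and all function names and data shapes are assumptions), the slice-combining and redacting steps could be implemented along these lines:

```python
# Illustrative sketch only; not the applicant's disclosed implementation.
# Assumes audio as a list of samples with a known sample rate, and that
# each trigger analysis yields (start_sec, end_sec) time slices.

def merge_slices(first, second):
    """Combine two sets of time slices, merging any overlaps."""
    slices = sorted(first + second)
    combined = []
    for start, end in slices:
        if combined and start <= combined[-1][1]:
            # Overlapping slice: extend the previous one.
            combined[-1] = (combined[-1][0], max(combined[-1][1], end))
        else:
            combined.append((start, end))
    return combined

def redact(samples, sample_rate, slices):
    """Blank (zero) all samples falling inside the combined time slices."""
    redacted = list(samples)
    for start, end in slices:
        lo, hi = int(start * sample_rate), int(end * sample_rate)
        for i in range(lo, min(hi, len(redacted))):
            redacted[i] = 0
    return redacted
```

The sketch assumes blanking as the redaction operation; the claims leave the redaction technique open.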
These limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. For example, the claim(s) read(s) on receiving audio, analyzing the audio based on a plurality of triggers to identify sensitive information and corresponding timings, analyzing the audio for a plurality of second triggers and corresponding times, combining the first and second times to create time chunks, redacting within those time chunks, and storing the redacted audio. There are no added claim elements which preclude the steps from practically being performed in the mind. All of these steps can be performed in the mind and/or using pen and paper.
For example, the step of “receiving a recorded voice” is something that can be performed by mentally listening to audio playback. Analyzing audio using a plurality of primary triggers to identify sensitive information could reasonably be understood as being presented with a written list of topics of sensitive content. Based on the provided list, a user could transcribe the text, compare words of the audio to the written triggers, and mark words as sensitive, wherein the timing information would be represented through the ordering of the words, i.e. a credit card number will come before a security code, indicating the credit card number has an earlier time. Further, time could be recorded using any generic timekeeping measure such as a clock or stopwatch. Analyzing audio using a plurality of secondary triggers to identify sensitive information could be understood in the same manner, with the user comparing the transcribed words to a second written list of triggers and marking matches along with their timings. The step of combining the first set of time slices and second set of time slices is equivalent to reviewing the previously transcribed text with associated timings, and segmenting the text based on where the times associated with triggers are located. This can be done with pen and paper. The step of redacting audio data occurring within the combined set of time slices can reasonably be understood to be performed through any generic audio recording/editing device, resulting in a redacted voice communication.
For example, a user could record themselves yelling in the intervals of audio between time slices and overlay the yelling using generic computing components, a user could mute the audio during these intervals (also using generic signal processing techniques/components), or a user could apply any other audio noising technique well known in the art. Further, a user could record a new audio clip in which they do not speak during the determined sensitive information intervals. This is equivalent to redacting audio from a recorded voice communication, in the form of a new audio file. Lastly, the step of storing the redacted voice communication, i.e. a new audio file, is something that can be performed mentally, i.e. remembering the audio track, or on physical content such as a non-transitory computer readable medium in a physical repository.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea (Step 2A, Prong one, Yes).
This judicial exception is not integrated into a practical application because the addition of generically recited computer elements does not add a meaningful limitation to the abstract idea; they amount to simply implementing the abstract idea on a computer. The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception (Step 2A, Prong two, No). As discussed above with respect to integration of an abstract idea into a practical application, the additional elements of "receiving", "analyzing", "combining", "redacting", and "storing" are merely for the purpose of data gathering, storing, processing, and/or insignificant extra-solution activity that amounts to no more than mere instructions to apply the exception using a generic computer component. The instant application does not disclose applying the invention to any particular computing environment beyond, at most, a generic one. Instructions to perform a mental process cannot provide an inventive concept. Therefore, the claims are not patent eligible (Step 2B, No).
Similarly, dependent claim(s) 2, 4-15 include additional steps that are considered “insignificant extra-solution activity to the judicial exception” because they fail to provide meaningful significance that goes beyond generally linking the use of an abstract idea to a particular technological environment.
For example, claim 2 reads on generating a text transcript of a recorded voice communication and redacting text in the transcript corresponding to sensitive data, i.e. occurring in the combined time slices. Generating a text transcript is equivalent to writing down heard speaking. Redacting text based on the text occurring within the combined set of time slices is also a mental process. During the transcription, a user will identify sensitive information, i.e. triggers, with time stamps (see claim 1 rejections). Redaction can be performed by crossing out the sensitive information or copying a new version of the transcript without the text in the combined time slices.
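As an illustration only (not taken from the application's disclosure; the tuple shapes and function name below are assumptions), the transcript-redaction step of claim 2 amounts to comparing word-level timings against the combined time slices:

```python
def redact_transcript(timed_words, slices):
    """Replace any word whose timing overlaps a combined time slice.

    timed_words: list of (word, start_sec, end_sec) tuples (assumed shape).
    slices: list of (start_sec, end_sec) combined time slices.
    """
    out = []
    for word, start, end in timed_words:
        # A word is redacted if its interval overlaps any combined slice.
        hit = any(start < s_end and end > s_start for s_start, s_end in slices)
        out.append("[REDACTED]" if hit else word)
    return " ".join(out)
```

The interval-overlap comparison is the same check a person could make by eye against a marked-up transcript, consistent with the mental-process characterization above.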
Claim 4 reads on the redacting comprising blanking, obfuscating, or cutting audio data from the recorded voice. As previously disclosed, redaction can take the form of a re-recording where the determined portions of text to be redacted are either not spoken or spoken over with irrelevant information/noise. Recording audio can be performed using any number of generic components. Determining which sections to redact based on previously determined sections of time slices is a mental process associated with reviewing previously mentally determined time slices.
Claim 5 reads on a secondary trigger being activated only if a primary trigger fails to identify sensitive content. Determining whether or not to activate a secondary trigger based on sensitive information missed by a first trigger is equivalent to performing redaction, previously determined to be a mental process, a second time on previously redacted text, wherein a review, i.e. user feedback, is performed to identify previously missed redactions. Performing the mental process of a primary trigger redaction for a secondary trigger is also a mental process, as both kinds of triggers are used for the same operation, i.e. redaction.
Claim 6 reads on the plurality of secondary triggers being different from the plurality of primary triggers. Determining triggers to be different between primary and secondary classifications is a mental process. A user can mentally determine which triggers are primary/secondary and also can make sure there are no overlaps between the categories as would be determined mentally through reading/writing lists of triggers.
Claim 7 reads on the primary trigger checking for sensitive content based on identification of a first detection event and analyzing audio for a predetermined time after the detection event. Determining detection events, i.e. detection of a trigger (wherein the triggers are word/number series, see [0014] of the instant application), is a mental process associated with listening to the audio recording for matches of written triggers. Analyzing audio for a predetermined time after the detection event is equivalent to the mental process of listening to audio after a user hears a keyword.
Claim 8 reads on not checking audio data before the detection event for primary triggers. Determining not to start listening to someone speaking until a certain word, i.e. trigger, is heard is a mental process.
Claim 9 reads on the secondary trigger checking for sensitive content by identifying a second detection event, wherein time periods both before and after the second detection event are analyzed. Determining to start listening after a keyword, i.e. detection event, is a mental process associated with listening. Determining to return to audio before the selected keyword is heard is also a mental process associated with listening which can be performed using any generic audio playback device through rewinding.
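The window behavior described for claims 7-9 can be sketched as follows. This is purely illustrative, not from the record; the window lengths and function names are assumptions. A primary trigger analyzes only a window after its detection event, while a secondary trigger analyzes windows both before and after its detection event:

```python
def primary_window(event_time, after=10.0):
    """Forward-only analysis window following a first detection event.

    The 10-second default is an arbitrary illustrative value.
    """
    return (event_time, event_time + after)

def secondary_window(event_time, before=5.0, after=10.0):
    """Bidirectional analysis window around a second detection event.

    Clamps the lower bound at 0 so the window stays within the recording.
    """
    return (max(0.0, event_time - before), event_time + after)
```

The backward-looking portion of the secondary window corresponds to the "rewinding" operation discussed above.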
Claim 10 reads on the first detection event being the start of audio data associated with sensitive content. Determining whether or not audio data has sensitive content, based on a written list of trigger words is a mental process associated with listening to speaking. Labelling the start of sensitive audio data as a detection event is something that can be added to the written transcription of text, as previously disclosed.
Claim 11 reads on the first detection event being an identification of a phrase and/or number combination associated with sensitive content. Determining whether phrases, i.e. full names, or number combinations, i.e. credit card numbers, are sensitive based on received audio is a mental process of comparing the received audio to written detection events, i.e. triggers.
Claim 12 reads on a time slice associated with the first detection event being set to the beginning of a first detection event. Determining to begin a time slice at the beginning of a first detection event is a mental decision. The decision to start a time slice upon detection of a word is a mental process of hearing the word, and notating the starting time on a corresponding transcription.
Claim 13 reads on a time slice associated with the first detection event being set to the end of a first detection event. Determining to begin a time slice at the end of a first detection event is a mental decision. The decision to start a time slice upon detecting the ending of a word is a mental process of hearing the word and notating the starting time on a corresponding transcription, wherein the time would be at the end of the determined detection event.
Claim 14 reads on encrypting the redacted voice communication before storage. Encryption was a well-known data anonymization technique in the art before the effective filing date of the claimed invention. Further, encryption can be performed using a mentally determined encryption key written on paper to translate the audio. Therefore, it does not provide an inventive concept.
Claim 15 reads on storing an encrypted version of the recorded voice in the voice communication database in association with the redacted voice communication database. Storing an encryption, i.e. a written string of numbers/letters corresponding to original text, is something that can be done in a physical file of encryptions.
Therefore, these claims are also not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2, 4-6 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Channakshava et al. (US-10728384-B1), hereinafter Channakshava.
Regarding claim 1, Channakshava discloses: a method of redacting voice communication (Abstract, sensitive information is redacted from the call recording), comprising:
receiving a recorded voice communication ([Col. 3, Lines 41-42] The recording module 120 records a conversation between at least two people [Indicating audio data, i.e. voice communication]) at a redaction system ([Col. 7, Lines 50-51] The redacting module 150 redacts the sensitive data 193 from the audio data 191);
analyzing the recorded voice communication using a plurality of primary triggers to identify a first data set of sensitive information ([Col. 6, Lines 48-50] a first intent may be detected after an agent asks the customer for a credit card number, and the customer provides the credit card number. Here, two keywords may define an intent of “credit card number” in which the first keyword may be the phrase “credit card number” and the second keyword may be a number that conforms to an industry standard for credit card numbers [The phrase “credit card number” tracks to a trigger, wherein a credit card number is inherently sensitive information, the phrase and corresponding number tracks to a data set of sensitive information]) and a corresponding first set of time slices ([Col. 10, Lines 23-24] The detecting module 140 also generates a start timestamp… for each intent);
analyzing the recorded voice communication using a plurality of secondary triggers to identify a second data set of sensitive information ([Col. 7, Lines 1-6] a second intent may be detected after an agent asks the customer for an expiration date of the credit card, and the customer provides the expiration date. Here, two keywords may define an intent of “expiration date” in which the first keyword may be the phrase “date on the card” followed by a second keyword of a month and a year. [The phrase “date on the card” tracks to a trigger, wherein an expiration date is inherently sensitive information, the phrase and corresponding number tracks to a data set of sensitive information]) and a corresponding second set of time slices ([Col. 10, Lines 23-24] The detecting module 140 also generates… a stop timestamp for each intent);
combining the first set of time slices and the second set of time slices to determine a combined set of time slices ([Col. 12, Lines 30-40] In this example, the detecting module 140 would detect such a consolidated intent and generate a single start timestamp for the beginning of the credit card number and a single stop timestamp for the end of the validation number. Such single start and stop timestamps in this example would indicate the portion of the audio data 191 to be redacted [A start timestamp, i.e. slice, for beginning of a credit card number, i.e. a primary trigger, and an end timestamp for end of a validation number, i.e. a secondary trigger, wherein those timestamps are used to identify sensitive data, indicates the two timestamps, i.e. slices, are combined to identify the section of sensitive data]);
redacting any audio data from the recorded voice communication occurring within the combined set of time slices to generate a redacted voice communication ([Col. 7, Lines 55-60] A portion of the audio data 191 to be redacted is determined based on a beginning timestamp and an ending timestamp, in which the sensitive data 193 is contained within the beginning timestamp and the ending timestamp [A beginning and ending timestamp track to first and second time slices used in conjunction, resulting in a combined set of time slices, wherein audio data being redacted will necessarily result in a redacted voice communication, see example redacted conversation of Fig. 5 with associated time stamps/slices in Fig. 4]); and,
storing the redacted voice communication in a voice communication database of the redaction system ([Col. 18, Lines 32-34] The redacting module 150 stores the redacted version of the redacted conversation 510 as redacted audio data 295 [Wherein the redacted audio data is clearly stored in a database responsible for voice communication, see where audio data 191 of Fig. 2 is stored, conversation database 190]).
Regarding claim 2, Channakshava discloses: the method according to claim 1.
Channakshava further discloses:
generating a text transcript of the recorded voice communication ([Col. 3, Lines 7-8] The recording of the voice conversation is transcribed into text);
redacting any text in the text transcript corresponding to the audio data from the recorded voice communication occurring within the combined set of time slices ([Col. 17, Lines 24-26] In this example, as shown in column 415, the text for row 426 is redacted based on the start timestamp of column 411 and the stop timestamp of column 413 [A start and stop timestamp for redacting text indicates those individual timestamps are combined to form the redaction interval]); and,
storing the text transcript in a transcript database ([Fig. 2, Transcript Data 192 within larger conversation database 190]).
Regarding claim 4, Channakshava discloses: the method according to claim 1.
Channakshava further discloses:
wherein the redacting comprises blanking ([Col. 7, Lines 53-57] The redacting module 150 performs the redaction by replacing a portion of the audio data 191 associated with the sensitive data 193 with a redaction message, such as the spoken word “redacted,” silence, and other redacted messages [Silence tracks to blanking or cutting out sensitive audio]), obfuscating ([Col. 14, Lines 4-5] The redaction can have a variety of forms such as replacing the sensitive data 193 with white noise [White noise tracks to a method of obfuscation]), or cutting the audio data ([See mapping to blanking]) from the recorded voice communication occurring within the combined set of time slices ([Col. 7, Lines 57-59] portion of the audio data 191 to be redacted is determined based on a beginning timestamp and an ending timestamp [Determining a portion of audio data based on two timestamps, i.e. slices, indicates those timestamps are combined]).
Regarding claim 5, Channakshava discloses: the method according to claim 1.
Channakshava further discloses:
wherein at least one secondary trigger of the plurality of secondary triggers is activated only if a corresponding primary trigger of the plurality of primary triggers fails to identify sensitive content ([Col. 7, Lines 40-50] In this example, the keyword “expiration date” may not only indicate that sensitive data of a credit card expiration will follow from a customer's response, but also that the preceding statement by the customer likely contained a credit card number…[Col. 11, Lines 25-45] the preceding closing intent may also be utilized as an opening intent for a second redaction. For example, the preceding example of a customer providing a credit card number could be an opening intent that the customer will next provide an expiration date [Looking at a preceding, i.e. primary, trigger, i.e. statement, for information about what could possibly be contained within a current statement, wherein the statements are clearly related, i.e. credit card number and expiration date, indicates a secondary trigger activated for an expiration date based on a primary trigger of credit card number, wherein the primary trigger will fail to identify the sensitive content of an expiration date as it is specific to the “credit card number” phrase and associated card number time slice as previously disclosed (see Col. 6, Lines 45-55). Further, the expiration date is clearly sensitive content, using a second redaction indicates failure of a first redaction to redact all sensitive information. Further, consider the timestamp disclosed in [Col. 13, Lines 35-45] which looks behind in time from a current point for missed sensitive content, indicating a second trigger based on a failed first trigger in view of the above triggers]).
Regarding claim 6, Channakshava discloses: the method according to claim 1.
Channakshava further discloses:
wherein the plurality of secondary triggers are different than the plurality of primary triggers ([Col. 6, Lines 50-55] Here, two keywords may define an intent of “credit card number” in which the first keyword may be the phrase “credit card number” and the second keyword may be a number that conforms to an industry standard for credit card numbers… [Col. 7, Lines 1-6] a second intent may be detected after an agent asks the customer for an expiration date of the credit card, and the customer provides the expiration date. Here, two keywords may define an intent of “expiration date” in which the first keyword may be the phrase “date on the card” followed by a second keyword of a month and a year [In these pairs of examples, primary and secondary triggers can represent either the first and second keywords of the individual intents, i.e. credit card number and the associated number, or the different intents could also reasonably be understood to be triggers, i.e. credit card number and expiration date, in view of primary and secondary triggers 202 and 204 of Fig. 2 of the instant application]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 7-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Channakshava in view of Mandic et al. (US-20180054519-A1), hereinafter Mandic.
Regarding claim 7, Channakshava discloses: the method according to claim 1.
Channakshava further discloses:
wherein each primary trigger of the plurality of primary triggers checks for sensitive content by identifying a first detection event ([Col. 10, Lines 4-10] The detecting module 140 processes the transcript data 192 for utterances of a conversation and determines which utterances describe an intent based on the intent data 294. For example, an intent may be to say a credit card number, and the detecting module 140 determines when the utterance associated with the intent began and when it ended [Detecting a credit card number, i.e. first detection event, and determining start and end times associated with that detection event, i.e. recitation of the card number, indicates the triggers are checking for sensitive content, i.e. the numbers, to know when to set an utterance end time, wherein a primary trigger could be “credit card number” or the start of the number itself as previously disclosed by Channakshava]).
Channakshava does not disclose:
analyzing the audio data in the recorded voice communication for a first predetermined time period after the first detection event.
Mandic discloses:
analyzing the audio data in the recorded voice communication for a first predetermined time period after the first detection event ([Figs. 6, 7], [0097] Precursively moving back the EOR timing marker back to an earlier time using upon an algorithm based upon the start record time marker, such as advancing the EOR to a point in time a predetermined time period after the start time [Wherein EOR represents an end of recording, moving the end of recording to a point a predetermined period of time after the start record time, in view of the triggers/detection events to start/stop recording of Channakshava, indicates a predetermined time period to analyze voice communication after a first detection event]).
Channakshava and Mandic are considered analogous art within audio redaction systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Channakshava to incorporate the teachings of Mandic, because of the novel way to trim audio files based on content and timing of content, reducing the heavy burden (in data storage and processing costs) imposed on call centers for sensitive data identification and redaction (Mandic, [0017]).
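The combined teaching mapped for claim 7, analyzing audio for a fixed, predetermined period after a first detection event, reduces to a simple window computation. The sketch below is illustrative only; the function name and the clamping to the end of the recording are assumptions, not details drawn from Mandic.

```python
def analysis_window_after(event_s, period_s, call_len_s):
    """Window of audio analyzed after a first detection event: a fixed,
    predetermined period following the event, clamped so the window
    never runs past the end of the recorded communication."""
    start = event_s
    end = min(event_s + period_s, call_len_s)
    return start, end
```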
Regarding claim 8, Channakshava in view of Mandic discloses: the method according to claim 7.
Channakshava further discloses:
wherein the audio data occurring before the first detection event is not checked for each primary trigger of the plurality of primary triggers ([Col. 10, Lines 25-35] The detecting module 140 also generates a start timestamp and a stop timestamp for each intent represented by the sensitive data 193. After the call between the agent and the customer is complete, and the applicable audio data 191 is stored in the conversation database 190, then the redacting module 150 redacts the portion of the audio data 191 that is associated with the sensitive data 193 based on the respective start and stop timestamps of each detected intent [Channakshava does not disclose looking behind from detected sensitive data, indicating audio data before the detection event, i.e. start timestamp, is not handled. The examiner would like to note that [Col. 13, Lines 30-45] discloses setting a start timestamp based on a predetermined time prior to an end timestamp, but setting a timestamp for redaction does not indicate checking of audio for triggers. Further, setting a start timestamp based on a predetermined time prior to an end timestamp is an operation equivalent to checking audio data before a second detection event associated with a secondary trigger, not a first detection event with an associated primary trigger]).
Regarding claim 9, Channakshava in view of Mandic discloses: the method according to claim 7.
Channakshava further discloses:
wherein each secondary trigger of the plurality of secondary triggers checks for sensitive content by identifying a second detection event ([Col. 7, Lines 1-5] a second intent may be detected after an agent asks the customer for an expiration date of the credit card, and the customer provides the expiration date. Here, two keywords may define an intent of “expiration date” in which the first keyword may be the phrase “date on the card” followed by a second keyword of a month and a year [In view of the previously disclosed credit card number detection event, indicating that an expiration date can be a second detection event associated with the sensitive data triggers of either a date or words “expiration date”]).
Channakshava does not disclose:
analyzing the audio data in the recorded voice communication for a second predetermined time period after the second detection event, and analyzing the audio data in the recorded voice communication for a third predetermined time period before the second detection event.
Mandic discloses:
analyzing the audio data in the recorded voice communication for a second predetermined time period after the second detection event ([Figs. 6, 7], [0097] Precursively moving back the EOR timing marker back to an earlier time using upon an algorithm based upon the start record time marker, such as advancing the EOR to a point in time a predetermined time period after the start time [Wherein EOR represents an end of recording, moving the end of recording to a point a predetermined period of time after the start record time, in view of the triggers/detection events to start recording of Channakshava, indicates a predetermined time period to analyze voice communication after a second detection event]), and analyzing the audio data in the recorded voice communication for a third predetermined time period before the second detection event ([0085] As an example, if the agent was to request an account number from the customer, and the agent activated the manual record ON function after the customer speaks the account number, the recording of the audio session would be reset to a time earlier than the manual record ON time trigger… [0086] As an example, if the designated data field contains credit card account information that the agent is manually entering account data based upon an oral presentation of information supplied by the customer, the automatic redaction system needs to begin recording the audio segment at a time prior to the detection of an input into the designated data field, that is, prior to t-df-1. Therefore a precursive predetermined time period is added to t-df-1 [In view of the audio analysis for triggers of Channakshava indicating adding time before a detection event for analysis of sensitive content, further in view of the sensitive content identification/analysis and associated first/second detection events of Channakshava, wherein the method of Mandic could be applied to either the first or second detection events of Channakshava with no change in functionality]).
Channakshava and Mandic are considered analogous art within audio redaction systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Channakshava to incorporate the teachings of Mandic, because of the novel way to trim audio files based on content and timing of content, reducing the heavy burden (in data storage and processing costs) imposed on call centers for sensitive data identification and redaction (Mandic, [0017]).
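The claim 9 mapping adds a precursive look-back to the look-ahead window: a second predetermined period after the second detection event and a third predetermined period before it. As an illustrative sketch only (names and clamping behavior are assumptions, not details from the references):

```python
def analysis_window_around(event_s, after_s, before_s, call_len_s):
    """Window analyzed around a second detection event: a predetermined
    period after the event plus a precursive predetermined period before
    it, clamped to the bounds of the recording."""
    start = max(event_s - before_s, 0.0)   # precursive look-back
    end = min(event_s + after_s, call_len_s)
    return start, end
```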
Regarding claim 10, Channakshava in view of Mandic discloses: the method according to claim 7.
Channakshava further discloses:
wherein the first detection event is recognition of a start of audio data associated with sensitive content ([Col. 10, Lines 5-10] The detecting module 140 processes the transcript data 192 for utterances of a conversation and determines which utterances describe an intent based on the intent data 294. For example, an intent may be to say a credit card number, and the detecting module 140 determines when the utterance associated with the intent began and when it ended [Knowing when to set start/stop timestamps corresponding to information to be redacted indicates recognition of audio data associated with sensitive content for redaction]).
Regarding claim 11, Channakshava in view of Mandic discloses: the method according to claim 10.
Channakshava further discloses:
wherein the first detection event is an identification of a phrase ([Col. 7, Line 5] the first keyword may be the phrase “date on the card”) or number combination ([Col. 7, Line 6] a second keyword of a month and a year [Generally represented with six numbers, i.e. MM/YYYY]) associated with sensitive content ([Information relating to a credit card is sensitive information]).
Regarding claim 12, Channakshava in view of Mandic discloses: the method according to claim 11.
Channakshava further discloses:
wherein a time slice associated with the first detection event is set to a beginning of the first detection event ([Col. 5, Lines 55-57] Components of the transcript data 192 have an associated beginning timestamp indicating the time of the beginning of the keyword within the audio data 191 [In view of the previously disclosed keywords of Channakshava, i.e. “credit card number”, indicating the keyword is used to represent identification of sensitive information, i.e. a first detection event]).
Regarding claim 13, Channakshava in view of Mandic discloses: the method according to claim 11.
Channakshava further discloses:
wherein a time slice associated with the first detection event is set to an end of the first detection event ([Col. 5, Lines 58-62] an associated ending timestamp indicating the time of the end of a component within the audio data 191. It is to be understood that a component of the transcript data 192 may be any speech formulation such as a single word, a plurality of words [Determining endings of components, wherein the components can represent sensitive information as previously disclosed, indicates an end of a detection event in view of the first detection event, i.e. credit card number, of Channakshava]).
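As mapped for claims 12 and 13, the time slice tied to the first detection event is pinned either to the beginning timestamp or to the ending timestamp of the detected keyword. A minimal illustrative sketch (hypothetical names, not code from Channakshava):

```python
def slice_anchor(keyword_span, anchor="begin"):
    """Given the (beginning, ending) timestamps of the detected keyword,
    return the time slice boundary: the beginning of the detection event
    (claim 12) or its end (claim 13)."""
    begin_ts, end_ts = keyword_span
    return begin_ts if anchor == "begin" else end_ts
```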
Regarding claim 14, Channakshava discloses: the method according to claim 1.
Channakshava does not disclose:
encrypting the redacted voice communication prior to storage.
Mandic discloses:
encrypting the redacted voice communication prior to storage ([Fig. 11, Redact Filter 46, Redact DB 60], [0059] In this manner call-center 10 can delete the entire audio file thereby trusting the TTP redaction system 12 to handle the entire raw audio file and ultimately provide back only non-redacted audio segments… reduced need to encrypt numerous, large files [Indicating encryption is still a reduced, but required, process], [0124] There is a “saved audio” data store which can be located in integrated database 76 or redaction database 60. Databases 76,60 permanently save the saved audio file which is generated as throughput by the filter [Wherein the filter is responsible for removing sensitive content, indicating the output from the filter is “encrypted” with sensitive information obscured/removed. In view of Fig. 11 where the encryption happening in the filter occurs before storage]).
Channakshava and Mandic are considered analogous art within audio redaction systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Channakshava to incorporate the teachings of Mandic, because of the novel way to trim audio files based on content and timing of content, reducing the heavy burden (in data storage and processing costs) imposed on call centers for sensitive data identification and redaction (Mandic, [0017]).
Regarding claim 15, Channakshava discloses: the method according to claim 1.
Channakshava does not disclose:
storing an encrypted version of the recorded voice communication in the voice communication database in association with the redacted voice communication database.
Mandic discloses:
storing an encrypted version of the recorded voice communication in the voice communication database in association with the redacted voice communication database ([0124] There is a “saved audio” data store which can be located in integrated database 76 or redaction database 60. Databases 76,60 permanently save the saved audio file which is generated as throughput by the filter [Wherein the filter is responsible for removing sensitive content, indicating the output from the filter is “encrypted” with sensitive information obscured/removed. In view of integrated database 76 of Fig. 11 indicating the encrypted audio is stored in association with the redaction voice communication database 60 of Mandic, i.e. either in the redaction database 60 or a separate part of the larger integrated database 76]).
Channakshava and Mandic are considered analogous art within audio redaction systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Channakshava to incorporate the teachings of Mandic, because of the novel way to trim audio files based on content and timing of content, reducing the heavy burden (in data storage and processing costs) imposed on call centers for sensitive data identification and redaction (Mandic, [0017]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Dwyer et al. (US-20150195406-A1) discloses “Methods and systems are provided for receiving a communication, analyzing the communication in real-time or near real-time using a computer-based communications analytics facility for at least one of a language characteristic and an acoustic characteristic, wherein for analyzing the language characteristic of voice communications, the communication is converted to text using computer-based speech recognition, determining at least one of the category, the score, the sentiment, or the alert associated with the communication using the at least one language and/or acoustic characteristic, and providing a dynamic graphical representation of the at least one category, score, sentiment, or alert through a graphical user interface” (abstract). Specifically, [0164]-[0184] discloses redacting sensitive data based on time associated with the data.
Schachter et al. (US-20130266127-A1) discloses “Systems and methods for, among other things, removing sensitive data from a recording. The method, in certain embodiments, includes receiving an audio recording of a call and a text transcription of the audio recording, identifying events which occur during the call by detecting characteristic audio patterns in the audio recording and selected keywords and phrases in the text transcription, determining, from the identified events, a first event which precedes sensitive data in the call and a second event which occurs after sensitive data in the call, determining a portion of the call containing sensitive data with a start time at the first event and an end time at the second event, and removing the portion of the call between the start time and end time from the audio recording.” (abstract). See entire document.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THEODORE JOHN WITHEY whose telephone number is (703)756-1754. The examiner can normally be reached Monday - Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at (571) 272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THEODORE WITHEY/Examiner, Art Unit 2655 /ANDREW C FLANDERS/Supervisory Patent Examiner, Art Unit 2655