DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
2. Receipt of Applicant’s Amendment filed on 02/10/2026 is acknowledged.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, namely a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Under the 2019 PEG, when considering subject matter eligibility under 35 U.S.C. § 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (step 1). If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea) (step 2A prong 1), and if so, it must additionally be determined whether the judicial exception is integrated into a practical application (step 2A prong 2). If an abstract idea is present in the claim without integration into a practical application, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself (step 2B).
In the instant case, claims 1-14 are directed to a non-transitory machine-readable storage medium, i.e., an article of manufacture. Thus, each of the claims falls within one of the four statutory categories. However, the claims also fall within the judicial exception of an abstract idea.
Under Step 2A Prong 1, the test is to identify whether the claims are “directed to” a judicial exception. The examiner notes that the claimed invention is directed to an abstract idea in that the instant application recites mental processes, specifically parsing data.
The examiner further notes that claims 1-14 recite a non-transitory machine-readable storage medium for parsing data, which corresponds to the abstract idea identified in the 2019 PEG in grouping “c” in that the claims recite certain mental processes, namely the parsing of data. The limitations, substantially comprising the body of the claim, recite a process of parsing data: categorizing characters and parsing according to the categorization. Because the limitations above closely follow the steps of parsing data, and the steps of the claims involve mental processes, the claim recites an abstract idea consistent with the “mental processes” grouping set forth in the 2019 PEG.
Claim 1:
A non-transitory machine-readable storage medium comprising instructions that upon execution cause a system to: receive delimiter separated value (DSV) data;
categorize a character in the DSV data into a selected layer of a plurality of layers;
wherein characters in a first layer of the plurality of layers comprise data characters;
characters in a second layer of the plurality of layers comprise delimiters; and
characters in a third layer of the plurality of layers comprise grouping symbols to group a string of characters into a semantic unit; and
parse the DSV data according to the categorizing.
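For illustration only, one way the recited three-layer categorization and parsing could operate is sketched below. This sketch is hypothetical: the layer numbering, the comma delimiter, and the quote-toggling behavior are assumptions for illustration and are not taken from the application or the cited art.

```python
# Hypothetical sketch of a three-layer character categorization for DSV data.
# Layer 1: data characters; Layer 2: delimiters; Layer 3: grouping symbols.
# The delimiter and grouping-symbol choices below are assumptions.

DELIMITER = ","          # layer 2: delimiter character (assumed comma)
GROUPING = {'"', "'"}    # layer 3: grouping symbols (assumed quotes)

def categorize(ch, in_group):
    """Assign a character to one of the three layers."""
    if not in_group and ch == DELIMITER:
        return 2            # delimiter layer
    if ch in GROUPING:
        return 3            # grouping-symbol layer
    return 1                # data-character layer

def parse_dsv(line):
    """Parse one DSV record according to the per-character categorization."""
    fields, current, in_group = [], [], False
    for ch in line:
        layer = categorize(ch, in_group)
        if layer == 3:
            in_group = not in_group   # grouping symbol opens/closes a unit
        elif layer == 2:
            fields.append("".join(current))
            current = []
        else:
            current.append(ch)
    fields.append("".join(current))
    return fields
```

As the sketch shows, a delimiter inside a grouped string is categorized as a data character rather than a delimiter, so the grouped string is parsed as a single semantic unit.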
These limitations, as drafted, recite an apparatus that, under its broadest reasonable interpretation, covers the performance of mental processes, specifically parsing data. Parsing data long predates the modern computer, and continues to be predominantly a product of human endeavor. The instant application is directed to parsing data. Additionally, the categorization of data into the three defined layers can be performed by a human via their mind and/or pen & paper. Furthermore, the parsing of the categorized data can be performed by a human via their mind and/or pen & paper. Because the limitations above closely follow the steps of parsing data, and the steps involve human judgments, observations, and evaluations that can be practically or reasonably performed in the human mind and/or with pen & paper, the claim recites an abstract idea consistent with the “mental process” grouping set forth in the 2019 PEG.
The mere nominal recitation of generic computing components such as a non-transitory machine-readable storage medium does not take the claim out of the mental processes grouping. Therefore, the claim is directed to an abstract idea.
If the claims are directed toward the judicial exception of an abstract idea, it must then be determined under Step 2A Prong 2 whether the judicial exception is integrated into a practical application. The Examiner notes that the considerations under Step 2A Prong 2 comprise most of the considerations previously evaluated in the context of Step 2B. The Examiner submits that the considerations that previously supported the determination that the claim does not recite “significantly more” at Step 2B would be evaluated the same under Step 2A Prong 2 and result in the determination that the claim does not integrate the abstract idea into a practical application. Specifically, the receiving of DSV data is simply a data-gathering step that is insignificant extra-solution activity and does not integrate the abstract idea into a practical application.
The instant application fails to integrate the judicial exception into a practical application because the claims merely recite the words “apply it” (or an equivalent) with the judicial exception, or merely include instructions to implement the abstract idea on a generic computer. The claims direct the reader to implement the identified mental processes of parsing data. The elements of the claim do not themselves amount to an improvement to the computer, to a technology, or to another technical field.
Here, the claim elements entirely comprise the abstract idea, leaving little if any aspects of the claim for further consideration under Step 2A Prong 2. In short, the claims have failed to integrate the judicial exception into a practical application (see at least 84 Fed. Reg. (4) at 55). Under the 2019 PEG, this supports the conclusion that the claim is directed to an abstract idea, and the analysis proceeds to Step 2B.
Many considerations in Step 2A need not be reevaluated in Step 2B because the outcome will be the same. Here, on the basis of the additional elements other than the abstract idea, considered individually and in combination as discussed above, the Examiner respectfully submits that claim 1 does not contain any additional elements that individually or as an ordered combination amount to an inventive concept, and the claims are ineligible.
With respect to the dependent claims, they have been considered and do not recite anything that transforms the abstract idea into a patent-eligible invention. Claims 2-14 are directed to further embellishments of the central theme of the abstract idea, namely the parsing steps of claim 1, and do not amount to significantly more than the abstract idea itself.
Specifically, claim 2 is directed towards the defining of the third layer of categorization which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Furthermore, claim 3 is directed towards the categorization of data which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Moreover, claim 4 is directed towards the defining of the third layer of categorization which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Additionally, claim 5 is directed towards the defining of the third layer of categorization which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Furthermore, claim 6 is directed towards the defining of the third layer of categorization and subsequent categorization of data which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Moreover, claim 7 is directed towards the categorization of data which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Additionally, claim 8 is directed towards the categorization of data which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Furthermore, claim 9 is directed towards the categorization of data which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Moreover, claim 10 is directed towards the defining of the third layer of categorization which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Additionally, claim 11 is directed towards the defining of the third layer of categorization which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Furthermore, claim 12 is directed towards the defining of the third layer of categorization and subsequent categorization of data which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Moreover, claim 13 is directed towards the defining of the third layer of categorization which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Additionally, claim 14 is directed towards the defining of the third layer of categorization and subsequent categorization of data which can be performed by a human via their mind and/or pen & paper and does not amount to significantly more.
Claim Rejections - 35 USC § 102
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
7. Claims 1-5 and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by LogViewPlus (Wayback Article entitled “LogViewPlus User Guide”, dated 25 October 2022, available at https://web.archive.org/web/20221025101343/https://www.logviewplus.com/docs/logviewplus_documentation.pdf).
8. Regarding claim 1, LogViewPlus teaches a non-transitory machine-readable storage medium comprising:
A) instructions that upon execution cause a system to: receive delimiter separated value (DSV) data (Pages 41 and 55);
B) categorize a character in the DSV data into a selected layer of a plurality of layers (Pages 41, 55, 56, and 63-64);
C) wherein characters in a first layer of the plurality of layers comprise data characters (Pages 41, 55, 56, and 63-64);
D) characters in a second layer of the plurality of layers comprise delimiters (Pages 41, 55, 56, and 63-65); and
E) characters in a third layer of the plurality of layers comprise grouping symbols to group a string of characters into a semantic unit (Pages 41, 55, 56, and 63-64); and
F) parse the DSV data according to the categorizing (Pages 41, 55, 56, and 63-64).
The examiner notes that LogViewPlus teaches “instructions that upon execution cause a system to: receive delimiter separated value (DSV) data” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41) and “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files” (Page 55). The examiner further notes that the parsing of log files in a delimited format necessarily entails receiving such DSV log files in the first place. The examiner further notes that LogViewPlus teaches “categorize a character in the DSV data into a selected layer of a plurality of layers” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. 
Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread…Priority…Logger…Message…Any Word…Multiple Words” (Page 63). The examiner further notes that the assigning of meaning to various different elements within a DSV log file teaches the claimed categorizing. The examiner further notes that LogViewPlus teaches “wherein characters in a first layer of the plurality of layers comprise data characters” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread…Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). 
The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes data characters. The examiner further notes that LogViewPlus teaches “characters in a second layer of the plurality of layers comprise delimiters” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread…Priority…Logger…Message…Any Word…Multiple Words… Finally, there are two additional special characters which can be used to process whitespace…Newline…Tab” (Pages 63-65). The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes whitespaces (i.e. delimiters in the broadest reasonable interpretation). Additionally, commas (i.e. 
delimiters) are also “categorized”. The examiner further notes that LogViewPlus teaches “characters in a third layer of the plurality of layers comprise grouping symbols to group a string of characters into a semantic unit” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values… a delimiter separated value file may have values which are surrounded by quotes and some fields may even contain the delimiter. Also, note that the above conversion patterns do not define a new line %n conversion specifier. In the case of a delimiter separated value file, the log entry is complete only when all expected fields have been read successfully. This means that fields can be multiline. Fields that run multiple lines, and fields that contain the delimiter as part of their values must be enclosed in quotes. Quotes can be either single or double. This will be determined inline. If the field starts with a quote character, then the same character will be the expected to close the field. To escape a quote character within a field value, you must use two quote characters back to back. For example, '' (two singles) or "" (two doubles)” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. 
In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread… Thread names can contain spaces, so you need a nonspaced literal separating the thread from other fields. For example, brackets in "[my thread]" …Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes grouping symbols (such as double quotes or brackets). The examiner further notes that LogViewPlus teaches “parse the DSV data according to the categorizing” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. 
In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread…Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). The examiner further notes that the assigning of meaning to various different elements within a DSV log is subsequently used to parse the specified DSV log.
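For illustration, the quoting behavior described in the cited passage of LogViewPlus (fields enclosed in single or double quotes, delimiters permitted inside quoted fields, and a quote within a field escaped by two quote characters back to back) can be sketched as follows. This is an illustrative reconstruction under those stated assumptions, not LogViewPlus source code; the function name and signature are hypothetical.

```python
# Hypothetical sketch of the quoting rules quoted from Page 55 of LogViewPlus:
# single or double quotes open a field, the same character closes it, a quoted
# field may contain the delimiter, and a doubled quote is a literal quote.

def read_field(text, pos, delim=","):
    """Read one field starting at pos; return (value, index after the field)."""
    if pos < len(text) and text[pos] in ("'", '"'):
        quote, out, i = text[pos], [], pos + 1
        while i < len(text):
            if text[i] == quote:
                if i + 1 < len(text) and text[i + 1] == quote:
                    out.append(quote)       # doubled quote -> literal quote
                    i += 2
                    continue
                i += 1                      # closing quote
                break
            out.append(text[i])             # delimiter inside quotes is data
            i += 1
        if i < len(text) and text[i] == delim:
            i += 1                          # consume trailing delimiter
        return "".join(out), i
    end = text.find(delim, pos)             # unquoted field: scan to delimiter
    if end == -1:
        return text[pos:], len(text)
    return text[pos:end], end + 1
```

Under this sketch, the quote characters serve the role the claims assign to the third layer: they group a string of characters, including embedded delimiters, into a single semantic unit.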
Regarding claim 2, LogViewPlus further teaches a non-transitory machine-readable storage medium comprising:
A) wherein the characters in the third layer comprise a token definer to define a token comprising a string value in a field of the DSV data (Pages 41, 55, 56, and 63-64).
The examiner notes that LogViewPlus teaches “wherein the characters in the third layer comprise a token definer to define a token comprising a string value in a field of the DSV data” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values… a delimiter separated value file may have values which are surrounded by quotes and some fields may even contain the delimiter. Also, note that the above conversion patterns do not define a new line %n conversion specifier. In the case of a delimiter separated value file, the log entry is complete only when all expected fields have been read successfully. This means that fields can be multiline. Fields that run multiple lines, and fields that contain the delimiter as part of their values must be enclosed in quotes. Quotes can be either single or double. This will be determined inline. If the field starts with a quote character, then the same character will be the expected to close the field. To escape a quote character within a field value, you must use two quote characters back to back. For example, '' (two singles) or "" (two doubles)” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. 
In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread… Thread names can contain spaces, so you need a nonspaced literal separating the thread from other fields. For example, brackets in "[my thread]" …Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes grouping symbols (such as double quotes or brackets) (i.e. token definers) that include characters within them. Moreover, Paragraph 114 of the instant specification states that double quotes are an example of token definers (See “Token definers are in the form of double quotes (”)” (Paragraph 114)).
Regarding claim 3, LogViewPlus further teaches a non-transitory machine-readable storage medium comprising:
A) wherein the instructions upon execution cause the system to: categorize the character as the token definer in the third layer based on detecting that the character is part of a first character set and the character follows a delimiter or a start of stream indicator (Pages 41, 55, 56, and 63-64).
The examiner notes that LogViewPlus teaches “wherein the instructions upon execution cause the system to: categorize the character as the token definer in the third layer based on detecting that the character is part of a first character set and the character follows a delimiter or a start of stream indicator” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values… a delimiter separated value file may have values which are surrounded by quotes and some fields may even contain the delimiter. Also, note that the above conversion patterns do not define a new line %n conversion specifier. In the case of a delimiter separated value file, the log entry is complete only when all expected fields have been read successfully. This means that fields can be multiline. Fields that run multiple lines, and fields that contain the delimiter as part of their values must be enclosed in quotes. Quotes can be either single or double. This will be determined inline. If the field starts with a quote character, then the same character will be the expected to close the field. To escape a quote character within a field value, you must use two quote characters back to back. For example, '' (two singles) or "" (two doubles)” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. 
In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread… Thread names can contain spaces, so you need a nonspaced literal separating the thread from other fields. For example, brackets in "[my thread]" …Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes grouping symbols (such as double quotes or brackets) (i.e. token definers) that include characters within them. Such token definers can clearly be started after a delimiter (such as a comma).
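For illustration, the claim 3 condition (a character is categorized as a token definer based on being part of a first character set and following a delimiter or a start-of-stream indicator) can be sketched as follows. The quote character set and delimiter are assumptions for illustration only and are not taken from the record.

```python
# Hypothetical check of the claim 3 condition: a quote character opens a
# token only when it follows a delimiter or the start of the stream.

def is_token_definer(text, i, delim=","):
    """True if the character at index i would be categorized as a token definer."""
    if text[i] not in ("'", '"'):        # must be in the assumed character set
        return False
    return i == 0 or text[i - 1] == delim
```

A quote appearing mid-field (e.g., the interior quotes in the Page 56 example) would fail this check and remain a data character.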
Regarding claim 4, LogViewPlus further teaches a non-transitory machine-readable storage medium comprising:
A) wherein the characters in the third layer comprise a literal grouper to group the string of characters into data characters of the semantic unit (Pages 41, 55, 56, and 63-64).
The examiner notes that LogViewPlus teaches “wherein the characters in the third layer comprise a literal grouper to group the string of characters into data characters of the semantic unit” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values… a delimiter separated value file may have values which are surrounded by quotes and some fields may even contain the delimiter. Also, note that the above conversion patterns do not define a new line %n conversion specifier. In the case of a delimiter separated value file, the log entry is complete only when all expected fields have been read successfully. This means that fields can be multiline. Fields that run multiple lines, and fields that contain the delimiter as part of their values must be enclosed in quotes. Quotes can be either single or double. This will be determined inline. If the field starts with a quote character, then the same character will be the expected to close the field. To escape a quote character within a field value, you must use two quote characters back to back. For example, '' (two singles) or "" (two doubles)” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. 
In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread… Thread names can contain spaces, so you need a nonspaced literal separating the thread from other fields. For example, brackets in "[my thread]" …Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes grouping symbols (such as double quotes or brackets) (i.e. examples of a literal grouper) that include characters within them. Moreover, Paragraph 115 of the instant specification states that double quotes are an example of a literal grouper (See “Literal groupers are also in the form of double quotes (”)” (Paragraph 115)).
Regarding claim 5, LogViewPlus further teaches a non-transitory machine-readable storage medium comprising:
A) wherein the literal grouper is an opening literal grouper, and the string of characters is defined between the opening literal grouper and an ending literal grouper, and wherein all characters between the opening literal grouper and the ending literal grouper are data characters (Pages 41, 55, 56, and 63-64).
The examiner notes that LogViewPlus teaches “wherein the literal grouper is an opening literal grouper, and the string of characters is defined between the opening literal grouper and an ending literal grouper, and wherein all characters between the opening literal grouper and the ending literal grouper are data characters” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values… a delimiter separated value file may have values which are surrounded by quotes and some fields may even contain the delimiter. Also, note that the above conversion patterns do not define a new line %n conversion specifier. In the case of a delimiter separated value file, the log entry is complete only when all expected fields have been read successfully. This means that fields can be multiline. Fields that run multiple lines, and fields that contain the delimiter as part of their values must be enclosed in quotes. Quotes can be either single or double. This will be determined inline. If the field starts with a quote character, then the same character will be the expected to close the field. To escape a quote character within a field value, you must use two quote characters back to back. For example, '' (two singles) or "" (two doubles)” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. 
In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread… Thread names can contain spaces, so you need a nonspaced literal separating the thread from other fields. For example, brackets in "[my thread]" …Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes grouping symbols (such as double quotes or brackets) (i.e. examples of a literal grouper) that include characters within them (i.e. the characters within the starting double quotation mark and ending double quotation mark). Moreover, Paragraph 115 of the instant specification states that double quotes are an example of a literal grouper (See “Literal groupers are also in the form of double quotes (”)” (Paragraph 115)).
Regarding claim 10, LogViewPlus further teaches a non-transitory machine-readable storage medium comprising:
A) wherein the characters in the third layer comprise a nesting grouper to nest characters into a nested group (Pages 41, 55, 56, and 63-64).
The examiner notes that LogViewPlus teaches “wherein the characters in the third layer comprise a nesting grouper to nest characters into a nested group” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values… a delimiter separated value file may have values which are surrounded by quotes and some fields may even contain the delimiter. Also, note that the above conversion patterns do not define a new line %n conversion specifier. In the case of a delimiter separated value file, the log entry is complete only when all expected fields have been read successfully. This means that fields can be multiline. Fields that run multiple lines, and fields that contain the delimiter as part of their values must be enclosed in quotes. Quotes can be either single or double. This will be determined inline. If the field starts with a quote character, then the same character will be the expected to close the field. To escape a quote character within a field value, you must use two quote characters back to back. For example, '' (two singles) or "" (two doubles)” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. 
They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread… Thread names can contain spaces, so you need a nonspaced literal separating the thread from other fields. For example, brackets in "[my thread]" …Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes grouping symbols (such as brackets) (i.e. examples of a nesting grouper) that nest characters within them. Moreover, Figure 4 of the instant specification depicts brackets as being a nesting grouper.
Claim Rejections - 35 USC § 103
9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
10. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
11. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
12. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over LogViewPlus (Wayback Article entitled “LogViewPlus User Guide”, dated 25 October 2022, available at https://web.archive.org/web/20221025101343/https://www.logviewplus.com/docs/logviewplus_documentation.pdf) as applied to claims 1-5 and 10 above, and further in view of Chen (U.S. PGPUB 2021/0365481).
13. Regarding claim 11, LogViewPlus further teaches a non-transitory machine-readable storage medium comprising:
A) wherein the nesting grouper is a first nesting grouper (Pages 41, 55, 56, and 63-64).
The examiner further notes that LogViewPlus teaches “wherein the nesting grouper is a first nesting grouper” as “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values… a delimiter separated value file may have values which are surrounded by quotes and some fields may even contain the delimiter. Also, note that the above conversion patterns do not define a new line %n conversion specifier. In the case of a delimiter separated value file, the log entry is complete only when all expected fields have been read successfully. This means that fields can be multiline. Fields that run multiple lines, and fields that contain the delimiter as part of their values must be enclosed in quotes. Quotes can be either single or double. This will be determined inline. If the field starts with a quote character, then the same character will be the expected to close the field. To escape a quote character within a field value, you must use two quote characters back to back. For example, '' (two singles) or "" (two doubles)” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. 
They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread… Thread names can contain spaces, so you need a nonspaced literal separating the thread from other fields. For example, brackets in "[my thread]" …Priority…Logger…Message…Any Word…Multiple Words” (Pages 63-64). The examiner further notes that the assigning of meaning to various different elements within a DSV log file includes grouping symbols (such as brackets) (i.e. examples of a nesting grouper) that nest characters within them. Moreover, Figure 4 of the instant specification depicts brackets as being a nesting grouper (which can be a first nesting grouper).
LogViewPlus does not explicitly teach:
B) wherein the nested group defined by the first nesting grouper is contained within or contains a second nested group defined by a second nesting grouper in the DSV data.
Chen, however, teaches “wherein the nested group defined by the first nesting grouper is contained within or contains a second nested group defined by a second nesting grouper in the DSV data” as “the character data (CDATA) is used and the structured data content is embedded within the CDATA section, which starts with (<![CDATA[) and ends with (]]>). In this example, the structured data content (Robert(Jason(John),Lily(Ken,Bill,Doug))) is enclosed in CDATA. The format of the nested parenthesis reflects the tree diagram 500 shown in FIG. 5. In this format, a sequence of alphanumeric characters may be designated as a “value” or “name” and the left parenthesis (( ) right parenthesis ( )), and comma symbols (,) may be used as delimiters. A parsing algorithm may be configured to distinguish between the “values” and the “delimiters.” The parsing algorithm may also be configured to determine a relationship between the values based on a sequence of delimiters between the values. Different parsing algorithms may be defined for different data types and for different structure types. For example, “Robert” is the first sequence of alphanumeric characters listed in the structured data content shown in Listing 2 above (e.g., Robert comes immediately after left square bracket symbol designating the end of the CDATA opening delimiter sequence). Accordingly, “Robert” is the root node 501 of the tree 500. A child relationship may be determined based on “Jason” being separated from another value by one left parenthesis (( ) any number of comma symbols (,), and any number of pairs of left parenthesis (( ) and right parenthesis ( )) with any characters or symbols in between the pairs of parenthesis. 
For example, “Jason” has a child relationship to “Robert” because they are separated by one left parenthesis (( ), “Lily” also has a relationship to “Robert” because Lily is separated from “Robert” by one left parenthesis, one pair of left and right parenthesis (e.g., the pair enclosing “John”) and one comma symbol (e.g., the comma immediately before “Lily”). Furthermore, the sibling relationship between “Jason” and “Lily” may be defined by these values being separated by any number of comma symbols and another number of pairs of left and right parenthesis. Thus, the relationships shown in the tree diagram 500 reflect the particular structure of the values and the delimiters in the structured data content” (Paragraphs 69-70).
The examiner further notes that although the primary reference of LogViewPlus teaches the use of nesting groups, it does not explicitly teach a nesting group within another nesting group. Nevertheless, the secondary reference of Chen teaches the concept of nesting groups within nesting groups (see the example of John being nested inside Jason via the use of multiple parentheses). The combination would result in the nesting groups of LogViewPlus having nesting groups within them.
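For illustration only, and not as part of the rejection: the nested-parenthesis structure described by Chen (e.g., Robert(Jason(John),Lily(Ken,Bill,Doug))) can be sketched as a small parser in which '(' opens a nested group under the preceding name, ')' closes it, and ',' separates siblings. The function name and the tuple representation below are hypothetical, not Chen's algorithm.

```python
def parse_nested(s):
    """Parse a name(child,child(...)) string into (name, [children]) tuples.

    '(' pushes a nested group under the preceding name, ')' pops it,
    and ',' separates sibling values, mirroring the delimiter roles
    described in Chen's example.
    """
    root = ("", [])          # sentinel parent for the real root node
    stack = [root]
    name = []
    for ch in s:
        if ch == "(":                            # opening nesting grouper
            node = ("".join(name), [])
            stack[-1][1].append(node)
            stack.append(node)
            name = []
        elif ch in ",)":
            if name:                             # flush a leaf value
                stack[-1][1].append(("".join(name), []))
                name = []
            if ch == ")":                        # closing nesting grouper
                stack.pop()
        else:
            name.append(ch)
    if name:                                     # bare top-level value
        stack[-1][1].append(("".join(name), []))
    return root[1][0]
```

On Chen's example string, the result places Jason and Lily as children of Robert, with John under Jason and Ken, Bill, and Doug under Lily, matching the tree diagram 500 relationships.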
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because incorporating Chen's nested delimiter parsing into LogViewPlus would have provided a method for expanding the types of data that can be represented, as noted by Chen (Paragraph 68).
Allowable Subject Matter
14. Claims 6, 12, and 14 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 101 set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Specifically, although the prior art (See LogViewPlus) clearly parses DSV data, the detailed limitations directed towards the categorization being performed based on the defined parameters are not found in the prior art, in conjunction with the rest of the limitations of the parent claims.
Dependent claims 7-9 and 13 are deemed allowable as depending from the deemed allowable subject matter of claims 6 and 12, respectively.
Response to Arguments
15. Applicant's arguments filed 02/10/2026 have been fully considered but they are not persuasive.
Applicants argue on Page 06 that “The claims are not directed to a judicial exception because they recite a specific technical parsing framework that cannot practically be performed in the human mind and, at minimum, integrate any alleged exception into a practical application that improves computer data parsing reliability and accuracy. The rules-based, layered categorization here improves computer functionality in processing textual streams by nullifying delimiters within quotes and handling nested groups rather than merely "applying" an idea”. However, as explained in the abstract idea rejection above, the categorization of data into the three defined layers (the first, second, and third layers) can be performed by a human via their mind and/or pen and paper. Furthermore, the parsing of the categorized data can be performed by a human via their mind and/or pen and paper. Because the limitations above closely follow the steps of parsing data, and the steps involve human judgments, observations, and evaluations that can be practically or reasonably performed in the human mind and/or with pen and paper, the claim recites an abstract idea consistent with the “mental processes” grouping set forth in the 2019 PEG. Simply put, a human can categorize DSV data into different layers and then parse such categorized DSV data. Furthermore, the additional element of receiving the DSV data is simply a data gathering step that is an insignificant extra-solution activity and does not integrate the abstract idea into a practical application. Moreover, the mere nominal recitation of generic computing components, such as a non-transitory machine-readable storage medium, does not take the claim out of the mental processes grouping. Therefore, the claims are directed to an abstract idea.
Applicants argue on Page 07 that “Applicant traverses the rejection of claim 1. The Office Action maps Logview to every element of claims 1-5 and 10, asserting it teaches a non-transitory medium with instructions to receive DSV, categorize characters into three layers, and then parse accordingly, with token/literal/nesting groupers. See Office Action, pp. 8-11. That mapping is incorrect because Logview describes conversion specifiers and general CSV quoting, but nowhere discloses categorizing each character into a selected layer of a plurality of layers as claimed, nor a DSV specific "nesting grouper" that nests characters into nested groups; its bracket discussion relates to Pattern Parser examples (e.g., "[my thread]"), not DSV character-layer grouping (Logview, pp. 41, 55-56, 63-65)”. However, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., a DSV specific “nesting grouper” that nests characters into nested groups) are not recited in rejected independent claim 1 (rather, the claimed nesting grouper is recited in dependent claims 6 and 8-14). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Moreover, the examiner wishes to refer to Logview, which states “The Delimiter Separated Value Parser is for log files in a delimited format, such as comma, tab or pipe separated values. For example, if your log file is also a CSV file” (Page 41), “DSV stands for delimiter separated values. 
We created the DSV Parser primarily because we needed a "CSV" parser - a log parser that was capable of reading comma separated value (CSV) files… When reading the parser arguments LogViewPlus will assume the first nonspace character which is not part of a conversion specifier is the character which should be used to separate values” (Page 55), “This log message could be parsed using the first conversion pattern given - %d, %t, %p, %c, %m. LogViewPlus will automatically remove all surrounding quotes, but whitespace will be preserved. In the above example, the Message field would contain: This is my "LogMessage", my app is running” (Page 56), and “A conversion specifier is used to represent a single field within a conversion pattern. They are a special character sequence that is used by LogViewPlus to assign meaning to a given part of log entry. Conversion specifiers always begin with a percent character '%'… Conversion specifiers are used by five of the eight default log parsers that ship with LogViewPlus: Pattern Parser, JSON Parser, XML Parser, DSV Parser, and Regex Parser…Date…Thread… Thread names can contain spaces, so you need a nonspaced literal separating the thread from other fields. For example, brackets in "[my thread]" …Priority…Logger…Message…Any Word…Multiple Words… Finally, there are two additional special characters which can be used to process whitespace…Newline…Tab” (Pages 63-65). The examiner further notes that the assigning of meaning to various different elements within a DSV log file teaches the claimed categorizing into multiple layers, including the claimed first layer (See data characters), the claimed second layer (See whitespaces and/or commas, i.e., delimiters under the broadest reasonable interpretation), and the claimed third layer (See grouping symbols of double quotes or brackets).
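For illustration only, and not as part of the rejection: the claimed three-layer categorization, as mapped above, can be sketched as a per-character pass in which quote characters are grouping symbols (third layer), unquoted delimiter characters are delimiters (second layer), and everything else, including delimiters inside a quoted group, is a data character (first layer). The function name and layer labels below are hypothetical.

```python
def categorize(dsv, delimiter=","):
    """Assign each character of a DSV string to one of three layers.

    Delimiter characters inside a quoted (grouped) region are categorized
    as data characters, since the grouping symbols make the enclosed
    string a single semantic unit.
    """
    layers, in_group, quote = [], False, None
    for ch in dsv:
        if not in_group and ch in "'\"":
            layers.append("grouper")            # opening grouping symbol
            in_group, quote = True, ch
        elif in_group and ch == quote:
            layers.append("grouper")            # closing grouping symbol
            in_group, quote = False, None
        elif ch == delimiter and not in_group:
            layers.append("delimiter")          # second layer
        else:
            layers.append("data")               # first layer
    return layers
```

For example, in `'a,"b,c"'` the first comma is categorized as a delimiter while the comma between the quotes is categorized as a data character.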
Applicants argue on Page 07 that “Anticipation requires a single reference to disclose every limitation arranged as in the claim, and the reliance on Logview's high-level parsing description does not satisfy the specific three-layer, per-character categorization followed by parsing. Logview's conversion specifiers operate at field/message level, not at the claimed character-level layering with explicit conditions (e.g., OD VS OLG based on preceding/following delimiters/SOS/EOS). See LogViewPlus, pp. 55, 56, and 63-65)”. However, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “OD VS OLG based on preceding/following delimiters/SOS/EOS”) are not recited in the rejected independent claim 1. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Rather, independent claim 1 recites the limitations of “categorize a character in the DSV data into a selected layer of a plurality of layers, wherein characters in a first layer of the plurality of layers comprise data characters, characters in a second layer of the plurality of layers comprise delimiters, and characters in a third layer of the plurality of layers comprise grouping symbols to group a string of characters into a semantic unit”. As explained above, Logview does teach the claimed categorizing into the first, second, and third layers as defined in independent claim 1.
Applicants argue on Page 07 that “By contrast, the current application describes use of a three-layer character interpretation model and per-character categorization with parameters (InDef, InLG, NGS) and contextual rules for OD/ED, OLG/ELG, and ONG/ENG, then parses "according to the categorizing" - see specification of the present application ¶¶ [00108]-[00112], [00124]-[00146]). Logview does not teach those layered categorizations or the DSV nesting grouper semantics; its CSV quoting is token-level only and its bracket example is pattern parsing, not DSV nesting. Accordingly, the decisive limitations are missing”. However, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “three-layer character interpretation model and per-character categorization with parameters (InDef, InLG, NGS) and contextual rules for OD/ED, OLG/ELG, and ONG/ENG”, “DSV nesting grouper semantics”, and “DSV nesting”) are not recited in the rejected independent claim 1. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Rather, independent claim 1 recites the limitations of “categorize a character in the DSV data into a selected layer of a plurality of layers, wherein characters in a first layer of the plurality of layers comprise data characters, characters in a second layer of the plurality of layers comprise delimiters, and characters in a third layer of the plurality of layers comprise grouping symbols to group a string of characters into a semantic unit”. As explained above, Logview does teach the claimed categorizing into the first, second, and third layers as defined in independent claim 1.
Moreover, the assigning of meaning to various different elements within a DSV log in Logview is subsequently used to parse the specified DSV log.
Conclusion
16. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. PGPUB 2024/0004840 to Vora et al., published 04 January 2024. The subject matter disclosed therein is pertinent to that of claims 1-14 (e.g., methods to process DSV data).
U.S. PGPUB 2014/0280254 to Feichtner et al., published 18 September 2014. The subject matter disclosed therein is pertinent to that of claims 1-14 (e.g., methods to process DSV data).
17. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
18. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mahesh Dwivedi whose telephone number is (571) 272-2731. The examiner can normally be reached on Monday to Friday 8:20 am – 4:40 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones, can be reached at (571) 272-4085. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Mahesh Dwivedi
Primary Examiner
Art Unit 2168
February 24, 2026
/MAHESH H DWIVEDI/Primary Examiner, Art Unit 2168