Get the max value from a slice of a JSON Array?
I would like to get the max value within a slice of a JSON array (typically [1,2,3,5,6,7,9,10]) which is contained in a field named Data of the table raw.
The limits Start and End of the slice are contained in another JSON object named Features, in a table named features.
Here is the input:
CREATE TABLE raw (
    id int PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    data json
);
INSERT INTO raw (data) VALUES
('[1,2,3,5,6,7,9,10]');
CREATE TABLE features (
id int,
features json
);
INSERT INTO features (id, features) VALUES
(1, '{"Start" : 1, "End": 5}');
The output I would like is 7, i.e. the max value of the slice [2,3,5,6,7].
Here is what I came up with by looking at other posts, but it does not work:
SELECT
R."ID",
F."Features"->>'Start' AS Start,
F."Features"->>'End' AS End,
sort_desc((array(select json_array_elements(R."Data")))[F."Features"->>'Start':F."Features"->>'End'])[1] as maxData
FROM
raw AS R
INNER JOIN
features AS F ON R."ID" = F."ID"
The approximate error message I get concerns sort_desc:
No function corresponds to this name or these argument types. You should convert the type of the data.
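For context, sort_desc comes from the intarray extension and accepts an int[], whereas ARRAY(SELECT json_array_elements(...)) produces a json[], hence the type mismatch. A minimal illustration, assuming the intarray extension is available:

-- sort_desc is provided by the intarray extension and sorts an int[] descending
CREATE EXTENSION IF NOT EXISTS intarray;
SELECT sort_desc(ARRAY[3,1,2]);  -- returns {3,2,1}
-- ARRAY(SELECT json_array_elements('[3,1,2]'::json)) is a json[], not an int[],
-- so sort_desc cannot be applied to it without converting the elements first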
postgresql json array postgresql-11
That's a horrible schema for this kind of query. At the very least, use jsonb; even better, don't use json at all, use an int[]. – Evan Carroll, 7 hours ago
2 Answers
You can unnest the JSON array. From the Postgres documentation on WITH ORDINALITY:
When a function in the FROM clause is suffixed by WITH ORDINALITY, a bigint column is appended to the output which starts from 1 and increments by 1 for each row of the function's output. This is most useful in the case of set returning functions such as unnest().
Have a look at this answer by Erwin Brandstetter: PostgreSQL unnest() with element number
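To see what WITH ORDINALITY contributes, here is a minimal standalone query against the sample array (expected output shown as comments). Note that ordinality counts from 1 while the question's Start/End are 0-based positions, which is why the query below adds 1 to both bounds:

-- each JSON array element paired with its 1-based position
SELECT t.elem, t.n
FROM json_array_elements_text('[1,2,3,5,6,7,9,10]'::json)
     WITH ORDINALITY AS t(elem, n);
-- elem | n
--  1   | 1
--  2   | 2
--  ...
--  10  | 8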
SELECT
r."ID",
MAX(t.elem::int) MaxElem
FROM
raw r
JOIN
features f
ON f."ID" = r."ID"
JOIN LATERAL
json_array_elements_text(r."Data")
WITH ORDINALITY AS t(elem, n) ON TRUE
WHERE
n >= (f."Features"->>'Start')::int + 1
AND
n <= (f."Features"->>'End')::int + 1
GROUP BY
r."ID";
ID | maxelem
-: | ------:
1 | 7
db<>fiddle here
Or, if you prefer to use the intarray module:
SELECT
    r."ID",
    (sort_desc(((ARRAY(SELECT json_array_elements_text(r."Data")))::int[])[(f."Features"->>'Start')::int + 1:(f."Features"->>'End')::int + 1]))[1]
FROM
    raw r
JOIN
    features f
    ON f."ID" = r."ID";
rextester here

– McNets
I upvoted for doing what he wanted, but there is something to be said here for not doing this at all lol – Evan Carroll, 6 hours ago
This is all around a horrible schema. You shouldn't be using json (as compared with jsonb) at all, ever (practically). If you're querying on the field, it should be jsonb. In your case that's still a bad idea, though; you likely want an SQL array.
CREATE TABLE raw (
    raw_id int PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    data int[]
);
INSERT INTO raw (data) VALUES ('{1,2,3,5,6,7,9,10}');
CREATE TABLE features (
feature_id int REFERENCES raw,
low smallint,
high smallint
);
INSERT INTO features ( feature_id, low, high ) VALUES ( 1, 1, 5 );
Now you can query it like this (remember, SQL arrays are 1-based):
SELECT max(unnest)
FROM raw
CROSS JOIN features AS f
CROSS JOIN LATERAL unnest(data[f.low:f.high]);
Also check out the intarray module, because it'll optimize the above:
CREATE EXTENSION intarray;
SELECT max(unnest)
FROM raw
CROSS JOIN features AS f
CROSS JOIN LATERAL unnest(subarray(data,f.low,f.high-f.low+1))
You can further optimize this if you know you just need the last element of the array.
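A minimal sketch of that shortcut, on the assumption that data is stored in ascending order (so the maximum of the slice is simply the element at the slice's upper bound):

-- assumes data is sorted ascending; then the max of data[f.low:f.high] is data[f.high]
SELECT data[f.high] AS max_elem
FROM raw
CROSS JOIN features AS f;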
Note: if this is a GIS problem, you're still probably doing it wrong, but at least this method is sane.

– Evan Carroll
Now it is said. Nice answer. But I think subarray requires start, length: subarray(data, f.low, f.high - f.low + 1) – McNets, 6 hours ago
Good catch! @McNets – Evan Carroll, 6 hours ago